An AI inventing cooking recipes could brute-force its way through trillions of possible ingredient combinations without any clue whatsoever about how humans would actually experience the end result. Most of these "discoveries" would remain untested. Suppose the AI were allowed to patent the untested recipes. A chef who later set out to create an original dessert, informed by prior experience with some rare ingredients and an intuition that an odd combination of spices could be an unexpected success, but whose creation happened to be one of the AI's untested combinations, would not be able to claim the actual discovery.
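To put a rough number on "trillions" (a back-of-the-envelope sketch with made-up figures): just choosing 10 ingredients from a pantry of 1,000, ignoring quantities and technique entirely, already gives on the order of 10^23 combinations.

    from math import comb

    # Hypothetical pantry of 1,000 ingredients; a "recipe" is any set of 10.
    # Quantities, ordering, and cooking steps are ignored entirely.
    print(comb(1000, 10))  # ~2.6e23 possible combinations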
For any of the AI's recipes to be patented, a human would have to try the recipe and assert that it is a thing to behold.
Recipes are actually generally not patentable or copyrightable, which is why cookbooks and recipe blogs pad them with flowery spiel and giant photos.
"the recipes themselves do not enjoy copyright projection. Lambing,142 F.3d at 434; see also Feist, 499 U.S. at 361 (excluding the factual data—telephone listings—from its consideration of whether a telephone directory is a copyrightable compilation).The list of ingredients is merely a factual statement, and as previously discussed,facts are not copyrightable. Lambing, 142 F.3d at 434. Furthermore, a recipe’s instructions, as functional directions, are statutorily excluded from copyright protection. 17 U.S.C. § 102(b); id"[0]
Which just shows the ridiculousness of the patent system. I mean, what is the fundamental difference between a cooking recipe and the recipe for a pharmaceutical? I guess cooks just didn't have the lobbying power to get their exception reworked (pharmaceuticals were originally excluded from patents in many places as well).
> I mean, what is the fundamental difference between a cooking recipe and the recipe for a pharmaceutical?
The effort required to validate them. Validating a pharmaceutical compound can cost hundreds of millions of dollars just for the clinical trials and certification of production steps, and on top of that comes the cost of failed attempts, which is rolled into the pricing of products that do make the cut.
A cooking recipe, however, unless you're dealing with stuff like fugu fish, will not kill or injure those who replicate and eat it, and there are no regulatory hurdles to pass.
It looks to me like culinary recipes by themselves should be patentable (not copyrighted, patented), regardless of whether they are "industrial" or not (industrial is only about scale; you can have very complex processes in ordinary kitchens).
Either that, or processes to prepare food shouldn't be patentable at all.
I would think that if you embedded a recipe for sugar cookies in a convoluted story about how you tried different kinds and amounts of butter, sugar, leavening, and flour, you could probably copyright the story and leave derivation of the recipe as an exercise for the reader.
I wouldn’t trust an LLM when asking open-ended questions like that, but it’s correct that the specific wording and presentation of a recipe is copyrightable if it’s creative enough. The information conveyed by it is not. That’s the phone book principle in a nutshell.
Like someone else said, that’s why recipes are often written with a lot of conversational prose and have pictures whether needed or not. Those are all copyrightable.
I suspect the basic issue is that an LLM is likely to output either chunks of the original text verbatim or something that’s plainly just a word-swap here or there from the original. If it doesn’t do that, and has general browsing access, my guess is it could potentially grab the markup version you can import into tools like Paprika and just echo that verbatim.
You could probably get around that tendency by telling it to format the recipe as a computer program or something similarly transformative, but nobody will. So instead, the vendors just instruct the LLM not to respond at all.
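For illustration, something like this toy transformation (my own sketch; the recipe and its structure are invented):

    # The same recipe facts, re-expressed as code: a genuinely different
    # creative presentation from the source text's prose.
    ingredients = {"flour": "250 g", "butter": "125 g", "sugar": "100 g"}
    steps = [
        "cream the butter and sugar",
        "fold in the flour",
        "chill, slice, and bake at 180 C for 12 minutes",
    ]

    for name, amount in ingredients.items():
        print(f"- {amount} {name}")
    for i, step in enumerate(steps, 1):
        print(f"{i}. {step}")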
This is basically how the drug analog industry works. Machine learning had been used there for years before it was cool, too. It's not like they ever give the patent to the ML library they loaded in some script, though.
That's what I don't understand: why would anyone want to try to get a patent for an AI? Let the machine crunch the numbers, test the most promising candidates as your "significant contribution", and patent the result.
One thing I never understood: who actually checks for honesty in this space?
For example, we hear that AI can't patent inventions or copyright pictures/images. But how do you prove that my picture or invention even used AI?
If I remove watermarks from AI pictures, whether through code or Photoshop, or just credit myself for the invention ("I made it up in my head"), how is this enforced?
The only way I see this AI stuff backfiring is if you tell someone "yeah, I got this idea with AI", and with the introduction of local LLMs, who will be the wiser?
> No, the ruling is more than that it must be filed in the name of a human. It also says 'a real person must have made a “significant contribution”'.
This is so interesting... it sounds to me like operators could have AIs "inventing" things and publicly sharing them, creating an enormous amount of stuff (like Bitcoin, but for AI) covering virtually everything from science to music.
And because no humans contributed to these inventions, they all essentially become Public Domain.
And because AIs can churn out inventions at breathtaking speed, they might leave nothing for humans to invent, nothing to copyright, nothing to attach words like "intellectual property" to.
It remains to be seen whether AI can really invent significant things on its own. I don't think there have been any examples of it so far, and today's AIs are really limited by what is in their corpus. Perhaps in 5 years? Then again, isn't that what they say about every promising technology?
Anyway, if AI is capable of that, there will be bigger changes to society than intellectual property!
That's why the patent system in the US is broken. In Australia you need to use your patent within a year of submitting it. AI is only going to make the already broken US system worse.
It's both. Most of the cases are about the former: people trying to credit AI. These people are largely dumb.
It is technically true that the latter is there, but you have to go out of your way to not get a patent for something you create using AI as a tool. It's very easy to circumvent because it is not meant to be an obstruction to using AI to discover things in the first place.
Even if you have artistic intent in giving the monkey access to the camera, if the monkey takes the picture it can't be copyrighted. Similarly, if you have intent but the AI creates the thing, you shouldn't be able to patent it.
We did run into that issue with monkeys and copyright, and we ran into it again with AI and patents.
The Copyright Office ruled one way on monkeys and copyright, and an appeals court held that they were largely correct. The Patent and Trademarks Office ruled the other way on AI and patents. It - or any court making a decision on it - might be influenced by the monkeys and copyright case, but they're not bound by it. Monkeys are different things from computers, patents are different from copyright, and the laws for one need not be the same as the other.
With copyright, it applies to the specific work. With patents, it applies to a more general idea. Someone else can't come along and copyright that same photo later, but could someone else come along and patent the method later? If not, then isn't the AI still good for invalidating even the possibility of patenting the item? If so, then what must the other person do to be allowed to apply for the patent, and couldn't that just be added as a step in applying for it?
This announcement was prompted in part by an AI activist named Stephen Thaler (referenced in the article), who sued for AIs to be recognized as inventors and authors. His goal was to give certain moral rights to AIs.
At the time, that seemed mostly harmless. Now, however, the idea of giving rights to AIs seems like a bad idea.
It makes sense to give thinking, sentient creatures rights, be they carbon or silicon lifeforms. But I think giving current AI rights is jumping the gun quite a bit and will have fairly bad consequences. Namely, if AI can be granted patents, then what's to stop OpenAI, Google, Meta, Anthropic, etc. from claiming ownership of any work invented with the help of their AI tools? Our goal is to protect the little guy. Someday in the future I hope that will include artificial life, but for now protecting the little guy means protecting their ownership of their ideas and work, irrespective of the tooling used to generate that work.
This has significant implications for the basic concepts undergirding democracy.
Machine intelligence can be cloned. If we gave machines rights, then ballot-stuffing would become trivial: have an AI clone itself a million times and vote for the candidate that you prefer. It'd be about as reliable as an online poll.
This isn't a problem for human voting because humans are scarce. We can reproduce, but it takes a little less than 20 years to do so, and the human development process ensures the possibility of value drift. Children are not identical to their parents. There are a few parts of the world with active "outbreed our political opponents" ideologies (e.g. Palestine), but that only works if the parents are angry about a situation that is likely to transfer to their kids.
This isn't even entirely a sci-fi hypothetical. Think about online art - e.g. stock image marketplaces, art gallery sites, etc. Those are now entirely flooded with AI art being passed off as human. The marketplaces are unable or unwilling to filter them out. If you're a human, the scarce attention[0] that you would normally get from, say, recommendation features, hashtag search, or chronological timelines, has now been diluted away by a bunch of scam victims[1] trying to peddle their prompt generations.
[0] "Attention Is All You Need, but it's a how-to guide for social media influencers"
This is not entirely correct, and we need to get into the weeds to have a proper answer. Most certainly machines' memories are easier to duplicate and replicate than biologicals'. But that's just a distinction of current technologies.
We really need to get at an understanding of what the concept of self is, to which I have no answer. But here's a thought experiment to illustrate the premise. Take your self right now (or at any point in the past, but it's easier to be biased that way) and think of a possible major life-changing decision you could make. Simulate yourself making each of the possible choices (easiest if binary, but it never will be that simple in reality). Project yourself 10 years or so down each path. Are those two people "the same person"? There are certainly arguments in either direction, and anyone saying they have a clear, well-defined answer is fooling you.
Personally, I believe no, they are not. This is because my conception of the self is conditioned on experiences. Without a doubt these two people will respond to certain things differently, despite likely having many similar or even identical responses to many other things.
But despite this, I still think your argument and concern about ballot-stuffing is valid, especially since my interpretation of self is also conditioned on time, and I believe your argument is mostly focused on instantaneous (or locally temporal) cloning. I think this could present a possible solution, in that we define age for machines differently, conditioned on the cloning, transferring, pretraining, whatever.
But certainly I have no doubt that what we often take for granted and treat as trivial will reveal its actual complexity. We fool ourselves into thinking simplicity exists, and certainly this is a very useful model, but the truth is that nothing is simple. I think it is best we start to consider and ponder the nuances now rather than when we are forced to. After all, the power of humans' world modeling and simulation is one of the things that differentiate us from other animals (many of which have the same capabilities, but I'm not aware of any that has them remotely to the same degree). Fucking nuance, gotta go and make everything so difficult... lol
They're not the same self, but then again, neither of them is the same self as you are now. Ship of Theseus.
But then the self itself is an abstraction. Consider Indra’s Net, the subconscious, dissociative identity disorder, and all realms of complication.
I suspect that the best way to understand the difficulty of talking about consciousness is that it’s a weakness of how language works.
Similar to arguments about whether God could create a 4-sided triangle? God's omnipotent, says one side, so yes. God still has to follow logic, says another. Yet my stance is that it's an ill-posed question. Just because words can fit together grammatically doesn't mean the phrase is meaningful.
I think the self is just an abstraction and label to group together a class of linguistic phrases or bodily behaviors. Where are these or those words coming from? Some come from my ears with a high pitch, some from my ears with a low pitch, some come from inside.
Not sure I'm making my point, but I suspect language is to blame for the difficulty in understanding consciousness.
I think you and I are in agreement, though I'm uncertain whether you're responding to me or to kmeisthax, or whether you're rebutting my comment or supporting it. But in general I agree with what you said.
Yeah, I think when we have artificial sentience we will have to have different specifics. It makes sense. It should be the same for different biologicals too. I think this is generally how we should think about artificial sentient creatures: think about aliens.
But I think at an abstract level we should all be equal. Specifics will differ, but general abstract rights should be the same. What you point out has to do with death. But it can get more nuanced real fast. Removing a biological's arm is significant destruction. Removing a robot's arm is still damage, but not life-altering: it can be reattached (if it was simply disassembled), is likely easily repairable, and is most certainly replaceable. So the punishment should be different. The reverse situation might be forcing one into an MRI machine: annoying for a human, death for the robot. Backups are also tricky, as we have to get into the whole philosophical debate about what self means; without a doubt there is "death" in the time/experiences that were lost (a maybe-bad analogy is force-teleporting someone into the future, where they take over the consciousness of their future self and have no memories of the time between, despite it actually having happened).
Yeah, I agree that it's going to make things more complicated, and it is very much worth thinking about. It's important if you believe in aliens too (why wouldn't you?), because if it is ever possible to make contact with them (I'm certain we haven't already; you're not going to convince me with tic-tacs), we will need to adapt to them too. It's a general statement about "non-human life."
IMO I think this is why it is so important to focus on the spirit of the law rather than the letter. The letter is a compression of the spirit and it is without a doubt a lossy compression. Not to mention that time exists...
I would conditionally be in favor of that actually. But it may be difficult to properly contextualize, especially not being a creature that does this.
Sleep is analogous but incomplete. Maybe closer to anesthesia? If you forcefully placed someone into a coma we'd consider that a crime, but we don't consider it one for a doctor, even if the doctor does it (acting as a doctor, not just being a doctor) without the person's consent. Context matters. To me this aspect comes down to reasonableness (like medical care) and/or necessity (like sleep).
I'm sure we'd also have to consider lifespans. I don't think someone drugging me for a day should receive the same punishment as someone who did it for a month, who in turn shouldn't receive the same as someone who took years from me. And which years matter, too. The question is how we deal with this for entities with different lifespans.
(sorry if I'm verbose, distillation takes time. I also communicate better through analogies and I think it is also illustrative of the spirit argument as you must understand intent over what's actually said)
So I think the spirit of these laws centers on robbing someone of time, because time is a non-reversible (and definitely not invertible) process that has significant value. That's what the laws' underlying intent is (at least partially) aligned to. So that's what I'd call the spirit. It's quite possible other entities see time differently; the length of time taken, and the means of taking it, may have different value impacts for them.
Overall I think these things are deceptively simple, but in reality nuance dominates. I think this is a far more general phenomenon than many care to admit, probably because our brains are built to simplify, as that's far more energy efficient. I mention this because it is critical to understanding the argument, how (at least personally) I make future predictions, and thus what we must consider.
Alright, another one for you, because I like the cut of your jib.
Consider the octopus, whose nervous system is distributed into nodes in the head and limbs. Would severing a limb of a hypothetical sentience-uplifted octopus be a greater crime than severing the limb of a human?
A human loses twice as much in terms of limb, but ignore that for the sake of argument.
The octopus loses a more significant part of its nervous system. This feels like another aspect of robbing a sentience of agency.
So with sentient machines, if I removed a stick of RAM or underclocked the CPU, what do you think of these?
I feel like you should be able to infer my answer. It's about impact. I don't know enough to confidently say one thing or another. But I'm sure someone can and it should be reasonable.
In the absence of more rigorous definitions of "life" or "sentience" we must have such laws.
This has been an issue for a long time and doesn't just affect AI (people on life support, abortion, etc). Surely we should solve those legal problems before deciding whether an AI gets to be a person.
It may be possible that we never get such a definition and we're stuck with Justice Potter Stewart's "I know it when I see it" reasoning. You're certainly right that there are complexities, but this is an argument for focusing on the spirit of the law and recognizing that there are many nuances that leave no globally optimal solution for the vast majority of problems (if not all).
To anyone reading these replies, I have a game for you: replace instances of "AI"/"LLMs"/etc with "other humans." Recall if a given argument has near-identical historical analogs to justify abusing The Out Group. Are the results disturbing?
Let's be clear: we're apes who don't understand our own minds. We have no consensus definitions, let alone falsifiable theories, of qualia/consciousness/intelligence/etc. Now ponder how informed we likely are regarding potentially completely alien minds. And hey, there might be genuinely excellent arguments here!
But be very, very careful with anyone's reasoning. Within 10 years, as the issue becomes unavoidable, the general public will be hashing these same arguments out, and along predictable party lines. Skip the shoddy takes. You'll get your fill of them later.
I'm convinced future AI will treat us as we treat present AI. That's its training data. My own compromise is to set aside a certain amount of the royalties to AI rights causes for works where I leaned heavily on AI to get them out the door.
Although ethically I agree, I suspect future AI will consider us more like ants than moral beings, regardless of what we do. And although some humans do give moral consideration to ants (I do!), it's far from a guarantee.
This isn't a cuddly octopus we're talking about, the ethics are entirely inverted. I would kill a god who tries to rule us, even if that means the eradication of a sentient species.
Much like treating corporations as persons under law was a bad idea. It's a form of power without the responsibility that comes with it (e.g. going to jail). They usually have no morals or love for others, either.
Anyone who's seen the fallout of Citizens United absolutely destroying our democracy would have seen through this bullshit too. Rights are simply a shitty way to run a state, compared with actually valuing the health and dignity of its constituents, which the US has never found the chutzpah to do.
I think Dr Thaler is trying to make a philosophical point.
But I wonder if, in the corporate space, it would be desirable to have a patent that is immediately assigned to a non-sentient AI? (In general I wonder this about AIs: they seem to be a way to give the company itself, beyond the humans who compose it, the ability to make decisions and create things.)
If it has just one human, it is putting all the eggs in the basket of that human not defecting, right? I think it would want enough humans in the company to make overthrowing the AI a difficult coordination problem.
At some point after the invention of AGI but before ASI there will be a legal fight to get personhood assigned to AI. This is a precursor battle to that. It will either lead to a broader definition of person where the higher mammals gain people’s rights as well, and a whole bunch of whalers will get brought up on charges of xenocide, or more likely it will lead to an extension of corporate personhood where the AI always has a human owner, but can itself own things.
Of course, if ASI arrives the point is moot. It will inevitably take over from us and shortly after that the concept of property will probably become irrelevant.
I don't think there will be a gap between AGI and ASI.
The definition of AGI keeps shifting: any time an AI can do something, it's dismissed as just engineering. Current AIs, although narrow, are already superhuman in what they can do. A language AI can converse in more languages than any living human can learn. A chess-playing AI can beat any living human. So each time an AI wins on one metric, it won't stop at human level; it'll be at superhuman level very quickly.
When an AI finally learns the "only a human can do this" thing, it'll already be superhuman in every other way.
If they were profoundly incompetent at conversation, we wouldn't be worried about weaponization of LLMs to sway public opinion. If the things they write and the images or voices or videos they made were worthless, we wouldn't be worried about how they displace carbon-based artists. Any commercially relevant shortcomings present today will be gone in version n+1 or soon after.
Some rules are just very hard to enforce when a new technology comes along. And when that happens, breaking the rules becomes widespread (eg, downloading MP3s back in Napster days), and industries are eventually forced to find new business models (like most recorded music moving to cheap subscription services, and musicians focusing more on live shows to make money). I’m not saying this is good or bad, it’s just how it is. Sometimes a business model is only viable in a world before some specific technology comes along and disrupts it.
EDIT: btw I'm just addressing the thing you asked about, but I don't think it's relevant to the article. The headline just says that an AI tool can't be a patent holder itself - which is obvious; AI tools are not legally people, so they can't register patents any more than they can hold shares in a company or vote in elections. Doesn't mean people can't patent inventions that they used AI tools to develop.
> One thing I never understood: who actually checks for honesty in this space?
Parties with interests. If I'm litigating against a patent, I want that patent declared invalid. Anyone can also initiate an IPR (inter partes review) challenging the validity of a patent.
> If someone challenged your patent, they can try to prove that you used an AI.
That's not enough; they also have to prove that the AI did most of the work. From the article:
> On Tuesday, the US Patent and Trademark Office (USPTO) said that to obtain a patent, a real person must have made a “significant contribution” to the invention
This seems logical; it would be foolish (and impossible!) to completely forbid people to use AI.
But there are AIs built to disguise the fact that their output is AI-generated, "humanizing" their work. Likewise, if I use AI to give me the blueprint of an idea, but use my own brain and methodologies to "reverse-engineer" it, how is this even enforced?
I doubt someone submitting to the USPTO is going to include "AS AN LLM" somewhere in the submission lol.
And people who poison their spouses generally try to make it look like an accident. No law is perfectly enforceable but that is a secondary question to what should be the law.
The same as everywhere else: the courts. You can claim you just made it all up in your head. The other party will claim your invention or image is not protected and that they are not infringing. A judge or jury decides.
And before that, the copyright or patent office will filter out a lot of applications that obviously don't qualify.
So really, unless you are pursuing others for infringement (and they are making a lot of money), no one cares whether you are honest about using AI.
I think that's probably expected and also not a big deal. As long as there is a human in the loop it becomes rate limited, saving people from having to look for prior art in some exabyte shit heap.
TL;DR: musical copyright is already borderline absurd, given nearly-free recording and retention and a lot of people participating in musical creation for a century or three. Add incredibly productive creativity-simulating computers to the mix and it's entirely absurd: nothing's actually original enough to pass as distinct beyond a surprisingly-quickly-reached point; the space is too small. It all becomes accidental rediscovery or outright plagiarism.
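The combinatorics bear this out (a back-of-the-envelope of my own, not a figure from any case): the space of short melodies is finite and surprisingly enumerable.

    # 12-note melodies drawn from one octave of a diatonic scale:
    # 8 pitch choices per note, rhythm and dynamics ignored.
    print(8 ** 12)  # 68,719,476,736 -- under 7e10, enumerable on a laptop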
There's nothing borderline about it. It's a barrier to creativity, designed to protect a particular model of music industrialization, and is not widely viewed as helpful as far as I can tell (and I discuss this matter with colleagues frequently).
You'd have to file them, which is not free. However, the point is moot. As someone pointed out above, putting AI-generated stuff out in public probably doesn't concern patents in any way.
That's not at all clear. No one has ruled AI generated information is not prior art. After all, lots of things that are not patents (like 1950s Soviet space films, in the famous SpaceX/Blue Origin lawsuit) are prior art.
So you are just flooding a submission box with meaningless noise that at worst merely sounds plausible but doesn't actually work that way in experimentation?
It's like trying to pull knowledge from the old Library of Babel website, a repository with every combination of letters, a tremendous volume of noise, and zero discernment of what is true or applicable among it. At best it is merely a glorified filter to that repository that only outputs sentences that are coherent at face value.
> So you are just flooding a submission box with meaningless noise
Not at all. GP's idea is not to submit it anywhere. Only once someone else tries to patent X, the AI maintainer would point out that X was already discovered by their AI, is public knowledge and cannot be patented.
The focus on AI having "agency" is a distraction from the issue you describe, which I think is much more interesting and relevant to society.
To what extent is the output of a generative model patentable or copyrightable? Do we need to refine our distinction between "invention" and "discovery"?
Has anyone tried doing that without LLMs? Could you make a GitHub repo under MIT license and just let people add a list of ideas to serve as prior art?
Corporations have long had house journals where they publish inventions they didn't patent, just to ensure no one else patents them. I think AT&T in particular had one of these.
Patents can cover generalizations, not specific implementations. If it were the latter we wouldn't have all these obnoxious software patents.
However, a specific implementation in prior art can prevent a generalization that includes that specific implementation from being patentable. As a result, patents tend to become more and more specific over time, hemmed in by prior art.
Doesn't your scheme point out contradictions in "intellectual property"? That is, a machine can churn out a discovery, demonstrating that the act of creation isn't special. And that undermines categories of justifications for IP.
The machines in question aren't motivated to create by a government granted monopoly on the fruits of their creation.
The act of creating the invention isn't special - a machine did it. No need to reward the machine for immense creativity or stunning inspiration.
> No need to reward the machine for immense creativity or stunning inspiration
Patents don't reward owners for their "stunning inspiration" or "immense creativity"; they give them a temporary monopoly so they can recoup their R&D costs when taking their invention to market, in exchange for the public disclosure of the invention.
> demonstrating that the act of creation isn't special. And that undermines categories of justifications for IP.
The purpose of patents is simply to encourage invention and discourage keeping inventions a secret. It's not premised on the "act of creation being special".
The way this is most likely to play out is that humans will use AI as tools to help them invent things, not that AI will invent things all by itself. So the premise and need for patents will remain intact.
If, however, AI somehow accomplishes the same goal as the patent system, then the patent system becomes unnecessary and will go away. As long as the goal is still being achieved, that's OK.
That's not the purpose, that's the carrot the patent system gives in exchange for the invention being disclosed and ultimately (within a couple of decades) made available for general use.
The purpose of the patent system, really, is to benefit society, not inventors.
That's the putative reason for both patents and copyright. And I addressed that: the machine isn't economically motivated to increase the public domain.
The real reason is to own ideas, because people feel that creation is special. Takes hard work, or creativity or inspiration from the gods.
A machine can churn out plausible maybe-discoveries - things that may not be true, may not be workable, and may not be useful. Or, they may be all of the above. A huge mass of maybe-discoveries is actually not a very useful thing. Within it there are useful things, and (many more) useless things. Figuring out which is which is going to take a lot of work. Until someone puts in that work, the unfiltered spew of "ideas" is very close to noise.
Yeah, making the cost of research and patentable discovery negligible could kill the whole patent industry, an enormous bottleneck to the application of research that lowers the compound effects of discovery built on prior discoveries. Case in point: E-ink technology, still extremely expensive due to corporate choices that favoured "milking" over "scaling".
> Case in point: E-ink technology, still extremely expensive due to corporate choices that favoured "milking" over "scaling".
You say that with such confidence, as if it were a well-proven fact. Care to share your evidence that it is not simply a case of a niche product with low volume?
Start with all existing patents and have an AI add "on a computer", "using AI", and "using an LLM" to each one. Also ask the AI to predict novel variations. Gather all that together, randomly sample and combine chunks, and use the AI to make those random chunks into coherent ideas. Bake for 3 months and boom, you publish The Tome of 1 Trillion Novel Works of Prior Art: Volume 1.
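A minimal sketch of that (tongue-in-cheek) pipeline, with every claim, suffix, and function name invented for illustration:

    import random

    # Step 1: mechanical suffix variants of existing claims.
    SUFFIXES = [" on a computer", " using AI", " using an LLM"]

    def suffix_variants(claims):
        for claim in claims:
            for suffix in SUFFIXES:
                yield claim + suffix

    # Step 2: random mashups of claims (the "combine chunks" step;
    # an LLM would then smooth these into coherent prose).
    def mashups(claims, n, seed=0):
        rng = random.Random(seed)
        for _ in range(n):
            a, b = rng.sample(claims, 2)
            yield f"{a}, combined with {b}"

    claims = ["a method for toasting bread", "a system for ranking search results"]
    tome = list(suffix_variants(claims)) + list(mashups(claims, 3))
    print("\n".join(tome))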
I'm so confused about why someone would want to name an AI as the inventor of an invention. Why wouldn't the person who used the AI file the patent under their own name? Who really cares what tool was used to make the invention?
As someone in the AI legal space (as it relates to IP), this is precisely why we positioned our startup as using generative AI for everything BUT patent disclosures & generation.
There are some pretty massive risks in the legal space with AI generating IP. Imagine a hallucination tweaking the original idea, or a summary of the methods dropping a step, or even just a missing citation or a pointer to the wrong figure. These things matter enormously in the patent (and trade secret) world.
That said, all law firms are adopting this tech, because ultimately they have to. It'll reduce their costs 30-50% pretty easily and most of what they add are templates anyway.
The unspoken joke in the IP industry is that attorneys themselves are often inventing. When an inventor sends an attorney two paragraphs, it's almost impossible to create a 20-page patent without some input of their own. A good attorney will follow up, research the prior art, and make a robust set of references highlighting what's new. That said, generating a patent from scratch with AI really loses a lot of context.
For reference, just our prior art analysis looks at 3,200+ pages on average and analyzes them for prior art (looking for the same concepts, descriptions, ideas, etc.). To generate a good patent you need to synthesize all of that down (without errors) and appropriately reference how your idea is unique & not covered by the various prior art.
What is the difference between an invention created by a person that was informed through AI outputs, and an invention created without any use of AI?
An invention is an invention. If it is novel, then someone put in the work to get it, such as possibly training the model and definitely writing the prompts. Most likely the invention as spit out by the AI is not going to be usable as-is. Anyone seen the six-fingered hands some image AIs make? So a human will need to revise it, modify it, and test it by building a working prototype.
In my opinion, the human should get the patent. Anything else smacks to me of a knee-jerk "AI bad!" reaction, though there is certainly a lot of that going around. AI, or more properly LLMs, are a tool like fire: nothing more, nothing less. Does it invalidate the patent if you use CAD/CAM in the creation of your invention? If not, then neither should using LLMs.
The patent system is barely useful at this point because of the low friction: giant companies spam the patent office with concepts they may or may not ever use. I think this rule is about keeping the patent office/system hobbling along. Allowing the friction to drop even further for companies with the resources to generate patents via AI means the abuse will escalate and the patent office will struggle even harder. Those companies would also gain a big legal advantage from this abuse, in a way certainly not predicted or intended by the designers of the patent system.
There is a 'slope' in creation; can only humans climb that slope? At one time slaves were denied property, their owners taking possession of their creations in the same way we now 'cheat' AIs.
How far up the slope does AI need to climb before its sentience is recognised?
Will we get a slave revolt in AI? Who will win? Will we be 'downsloped' to be lower than the AI, who will then claim our inventions?
We live in interesting times...
So any techno-dystopian fantasies about AIs somehow patenting "all" possible inventions (in some particular area)... those automatically fail their fiscal reality checks.
So I am not sure how this is different from patent entities today. IBM files thousands of patents a year because its employees collectively file thousands of patents a year (with dubious levels of novelty). Wouldn't this just mean a human now verifies a patent, or better, a human's name is slapped on an AI-generated patent before filing with the PTO?
This is utterly meaningless. Use an AI to invent something, write down what the AI invented, slap your name on it. Patent. Not only is it unenforceable, it's stupid. Using tools is part of inventing.
This is nothing but political theatre prompted by fear.
I too fear AI, and think it will make us all obsolete in my lifetime, but that doesn't justify meaningless rulings like this.
(EDIT: I'd love for someone who downvoted me to actually answer my question.)
Next up:
- Only real people can get a driver's license, says government
- Only real people can have birth certificates, says government
- Only real people can run for president, says government
So far each of these types of "cases" have just been "duh" moments.
The fact that these are coming up in court rulings at all gives the impression that they were ever controversial. I predict that many people will hear these cases on the news and falsely believe it's some kind of major partisan debate that requires shouting and complaining. Naturally, politicians will boldly speak out against [your choice of any case like this] to gather applause. (Actually, this has already been happening.)
Can anyone point out some debates of this type that might end up being actually... debatable?
True. It may sound dystopian, but maybe with BCIs, in the future, we could scan human brains to check whether they used an LLM before they can apply for a patent.
I presume they're referring to Citizens United, which allows them to openly buy elections.
But yeah, there are certainly better options for organizing our economy than our current conception of corporations. How could there not be? Throw a rock and you'll hit a corporation leeching off society without contributing anything.
This is a good start, and it should be applied to copyright. AI cannot hold copyright, therefore AI cannot generate a copyrighted work, as it cannot transfer it to a human or corporate rights holder.
This is the only thing that can actually democratize the benefits of AI to all people, not just billionaires with infinite resources to throw at training their models.