It's not okay for a human to pirate, plagiarize, violate IP rights and laws, etc.
But I disagree with the underlying assumption that you can anthropomorphize LLMs. Gradient descent and backpropagation don't take place in the brain. LLMs "learn" in the same way that Excel sheets "learn".
Humans are living beings with needs and rights. A person being able to legally squat in a home doesn't mean that a drone occupying property for some amount of time also has squatter's rights, even though you could easily and affordably automate and scale the deployment of drones to live and hide away on properties long enough to attain rights regarding properties all over the country.
> But I disagree with the underlying assumption that you can anthropomorphize LLMs. Gradient descent and backpropagation don't take place in the brain. LLMs "learn" in the same way that Excel sheets "learn".
Backprop doesn't happen in us, but I think our neurones still do gradient descent – synapses that fire together, wire together.
And ultimately, at the deepest level we can analyse, our brains' atoms are doing quantum field diffusion equations, which you can also do in an Excel spreadsheet, so that kind of reductionism doesn't help either.
> Humans are living beings with needs and rights. A person being able to legally squat in a home doesn't mean that a drone occupying property for some amount of time also has squatter's rights, even though you could easily and affordably automate and scale the deployment of drones to live and hide away on properties long enough to attain rights regarding properties all over the country.
Yes, but we can also do tissue cultures and crude bioprinting, so there's a very foreseeable future where exactly the same argument will also be true for living organisms rather than digital minds.
We need to figure out what the deeper rules are that lead to the status quo, not merely mimic the superficial result. The latter is how cargo cults function.
>Backprop doesn't happen in us, but I think our neurones still do gradient descent – synapses that fire together, wire together.
No! Hebbian learning is categorically NOT gradient-based learning. Hebbian update rules are local and not the gradient of any function.
Cortical learning is so vastly different from how artificial neural networks “learn” they cannot even begin to be meaningfully compared mathematically. Hebbian learning is not optimization and backprop is not local learning.
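To make the structural difference concrete, here's a minimal numpy sketch (the toy sizes and the linear neuron are my own assumptions, not taken from anyone upthread): the Hebbian step uses only quantities available at the synapse itself, while the gradient step requires a global loss and an error signal.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)    # presynaptic activations (toy example)
    w = rng.normal(size=3)    # synaptic weights
    eta = 0.01                # learning rate
    y = w @ x                 # postsynaptic activation (linear neuron)

    # Hebbian update: local. Each weight change depends only on the
    # activity of the two neurons that synapse connects ("fire together,
    # wire together"); no objective function appears anywhere.
    w_hebb = w + eta * y * x

    # Gradient descent: global. Each weight change is the derivative of
    # a shared loss, here squared error against a target output.
    target = 1.0
    grad = 2.0 * (y - target) * x     # d/dw of (w @ x - target)**2
    w_gd = w - eta * grad

    print("Hebbian step:  ", w_hebb - w)
    print("Gradient step: ", w_gd - w)

This doesn't settle whether brains do anything gradient-like; it only shows the two update rules have very different ingredients.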
Part of the problem of these discussions is a bunch of clueless people talking with authority.
Finally, a good counterargument. I've seen enough terrible arguments to know exactly how you feel, even specifically within AI.
I have to keep reminding myself that outside of my own speciality, ChatGPT knows more than me despite its weaknesses, so I bet ChatGPT knows more about Hebbian learning than I do.
> We need to figure out what the deeper rules are that lead to the status quo, not merely mimic the superficial result.
Sure, that's an interesting path of inquiry, and one should be free to understand themselves as being no different than a machine if they desire.
But the objective of laws is the benefit of (at least some) humans, not machines covered in lab grown tissue. The process of being human is a big part of what makes us human.
I think you're misapprehending — I mean an entity fully 3D printed out of tissue, no machinery (unless you're counting all biology as machinery, but I think you're not doing that).
I reckon bio-printing is now where home computing was in the Apple 1 era, so this is a way off, but it's foreseeable.
> The process of being human is a big part of what makes us human.
Mmm. How much has that process changed since the ancient world?
> I reckon bio-printing is now where home computing was in the Apple 1 era
How do you reckon that? The Apple 1 was Turing complete. We haven't printed life; that would be a tremendous accomplishment.
I think we're closer to the stage of Edison inventing the lightbulb, an early step toward computers being possible. Printing a conscious thing at all would be like the transistor. An Apple 1 analogue wouldn't be likely because of the terrible ethics of a "shitty" printed human.
> We haven't printed life, that would be a tremendous accomplishment.
Sure we have, and in multiple different senses.
The ones that matter here are cell culture, which is nowhere near the fanciest bar that's been surpassed in this field, and tissue culture, which is somewhat harder. The reason I reckon it's at the Apple 1 level is that a small number of experimentalists are messing around with it using expensive equipment that you can technically buy at home but need to be well trained to actually use, for example:
No. That isn't printing life; that is taking already-living cells and priming and transforming them into something useful. Regardless, I'd count it if we could make an entire living organism this way, but we can't. Creating a working organ is no doubt amazing, and proof that this technology is worth pursuing, but it isn't "printing life" any more than producing life-saving drugs is.
In your example you are talking about being able to bioprint a person (they have to be a person to have that right) to squat a property. Bioprinting an organ isn't an example of that; it's not even close. Saying that we are anywhere near being able to print a human to squat a property is pretty ridiculous.
> No. That isn't printing life, that is taking already living cells, priming and transforming them into something useful.
Which is absolutely sufficient for the usage I described upthread. In fact, I'd go so far as to say it's mandatory for the point I was making, as — fun though bio-printed werewolves, dragons, and fae would be — my point only works if you get humans out of the process rather than some other species. A bioprinted horse is probably slightly harder than a bioprinted human, but the horse isn't getting any squatting rights.
I could've linked to work on synthetic genomes and nucleotides to give evidence for lower-level creation of life, but they don't matter for the same reason:
My point is that there's a pathway heading off into the distance, and somewhere along it, before the horizon, are bio-printed humans with all the same moral issues we're only now beginning to take seriously thanks to AI being conversational. If we had something completely customised, that's cool and all, but it doesn't make anyone go "oh, they're people" the way a humanoid body with human DNA getting off a table and saying "hello, nice to meet you" does.
> In your example you are talking about being able to bioprint a person(they have to be a person to have that right) to squat a property. Bio printing an organ isn't an example of that, it's not even close. Saying that we are anywhere near being able to print a human to squat a property is pretty ridiculous.
I wrote "an entity fully 3D printed out of tissue […] is a way off, but it's foreseeable" and compared bio-printing today to a nearly 50 year old computer, and one of my references was a link to a youtube channel where someone is attempting to do a small-scale prototype thing along these lines with a handful of organs made from mouse cells grown in his own lab (and mouse cells rather than human because of the disease risk not because something magic happens with human cells). You're mixing up what I think is foreseeable with what I say already exists, and using the nonexistence of what I think can be foreseen to argue against what does exist.
Sadly, I have seen one. It was a VBA script from the late 90s that used a simple dense multilayer network to do some unsupervised pattern classification. The linear algebra tools in VBA/Excel along with the solvers are all native DLL code and the VBA itself is AOT-compiled to native, so it typically runs very fast, and for small matrices it beats out numpy by an order of magnitude due to the FFI overhead. Was it the wrong tool? It depends on your constraints, but probably. It did work though.
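For a sense of that per-call overhead point, here's a quick and dirty timing sketch that should show the effect (the 4x4 and 1000x1000 sizes are my own arbitrary picks, nothing to do with the original script):

    import timeit
    import numpy as np

    a = np.random.rand(4, 4)
    b = np.random.rand(4, 4)
    A = np.random.rand(1000, 1000)
    B = np.random.rand(1000, 1000)

    # Tiny matrices: the fixed cost of dispatching into numpy's compiled
    # routines can dominate the arithmetic itself.
    t_small = timeit.timeit(lambda: a @ b, number=100_000) / 100_000

    # Large matrices: the arithmetic dominates and call overhead is noise.
    t_big = timeit.timeit(lambda: A @ B, number=10) / 10

    print(f"4x4 matmul:       {t_small:.2e} s per call")
    print(f"1000x1000 matmul: {t_big:.2e} s per call")

The per-call time for the 4x4 case is mostly overhead, which is the window where a fully in-process implementation can win.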
Whoever operates the LLM, in this case OpenAI, engaged in copyright infringement through the unauthorized modification, reproduction and distribution of content to you.
> sure, but if I use an LLM to write a novel/article, I can be sued in civil court not the LLM
That's a function of the legal system, not of the technology. If tomorrow someone made a perfect dolphin-Esperanto translator and proved dolphins were as smart as humans, you still couldn't sue a dolphin until the legal system says so.
> anthropomorphize LLMs (...) gradient descent (...) backpropagation (...) needs and rights
You misunderstood me. I was talking about something more fundamental.
Understanding is data compression. They are the same thing. Learning patterns, building mental models, creating abstractions, generalizing, gaining intuition/a feel for something - all the things humans engage in as part of learning and understanding the world - are all acts of lossy data compression.
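As a toy illustration of that claim (my own example, with made-up numbers): fitting a line to noisy points replaces a thousand stored values with two parameters plus some reconstruction error, which is exactly lossy compression.

    import numpy as np

    rng = np.random.default_rng(42)
    x = np.linspace(0.0, 10.0, 1000)
    y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=x.size)  # 1000 raw samples

    # "Understanding" the data as a line: 2 parameters instead of 1000
    # stored values, i.e. a lossy summary of the dataset.
    slope, intercept = np.polyfit(x, y, 1)
    reconstruction = slope * x + intercept
    rms_error = np.sqrt(np.mean((y - reconstruction) ** 2))

    print(f"model: y ~ {slope:.2f} * x + {intercept:.2f}")
    print(f"rms loss from compression: {rms_error:.3f}")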
Also, if I write an article and quote some "text like this" [1] then that's not plagiarism, but if my argument is that the underlying assumption that you can anthropomorphize LLMs. Gradient descent and backpropagation don't take place in the brain. LLMs "learn" in the same way that Excel sheets "learn". Well, that's plagiarism, and it's not allowed, and people will get peeved, and my career might get damaged.
I await the HN ban with fear...
[1] I'm not even doing referencing - so I am surely an LLM.