> (this is the part where somebody stumbles in to say "dOn'T hUmAnS dO ThE sAmE tHiNg????")
As a frequent "your stated reasoning for why LLMs can't/don't/will-never <X> applies to humans because they do the same thing" annoying commenter, I usually invoke it to point out that
a) the differences are ones of degree/magnitude rather than ones of category (i.e. <X> is still likely to be improved by scaling, even if there are diminishing returns - so you can't assume LLMs are fundamentally unable to <X> because of their architecture), or
b) the difference is primarily just in the poster's perception, because the poster is unconsciously arguing from a place of human exceptionalism (that all cognitive behaviors must somehow require the circumstances of our wetware).
I wouldn't presume to know how to scale furbies, but the second point is both irrelevant and extra relevant because the thing in question is human perception. Furbies don't seem alive because they have a simple enough stimuli-behavior map for us to fully model. Shoggoth mini seems alive since you can't immediately model it, but is simple enough that you can eventually construct that full stimuli-behavior map. Presumably, with a complex enough internal state, you could actually pass that threshold pretty quickly.
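To make the "stimuli-behavior map" idea concrete, here's a minimal toy sketch (hypothetical classes, not real Furby or Shoggoth Mini internals): a fixed stimulus-to-behavior lookup you can exhaustively enumerate after a few pokes, versus a toy with hidden internal state, where the same stimulus stops guaranteeing the same behavior and the table you'd need to memorize grows with the interaction history.

```python
# Toy illustration only: neither class models any real Furby or Shoggoth Mini firmware.
import random

# "Furby-like": a fixed stimulus -> behavior lookup table.
# Once you've tried each stimulus once, you can predict the toy forever.
FURBY_MAP = {
    "pet": "purr",
    "shake": "giggle",
    "dark": "sleep",
}

def furby_respond(stimulus: str) -> str:
    return FURBY_MAP.get(stimulus, "blink")

# A toy with hidden internal state: the same stimulus can produce
# different behaviors depending on accumulated "mood", so the
# effective stimulus -> behavior map depends on the whole history.
class StatefulToy:
    def __init__(self, seed: int = 0):
        self.mood = 0.0
        self.rng = random.Random(seed)

    def respond(self, stimulus: str) -> str:
        self.mood += {"pet": 0.3, "shake": -0.5, "dark": -0.1}.get(stimulus, 0.0)
        self.mood += self.rng.uniform(-0.1, 0.1)  # a little noise on top of state
        if self.mood > 0.5:
            return "nuzzle"
        if self.mood < -0.5:
            return "recoil"
        return "watch"

if __name__ == "__main__":
    toy = StatefulToy()
    for s in ["pet", "pet", "shake", "shake", "pet"]:
        # furby_respond("pet") is always "purr"; the stateful toy's answer drifts.
        print(s, "->", furby_respond(s), "vs", toy.respond(s))
```

The first map is fully known after three observations; the second only feels predictable once you've seen enough history to infer the hidden state, which is roughly the threshold described above.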
> the poster is unconsciously arguing from a place of human exceptionalism
I find the specifics of that exceptionalism interesting: there's typically no recognition that their own thinking process has an explanation at all.
Human thought is assumed to be a mystical and fundamentally irreproducible phenomenon, so anything that resembles it must be "just" prediction or "just" pattern matching.
It's quite close to belief in a soul as something other than an emergent phenomenon.
I disagree with your response, because you are conflating modeling human behavior with being human.
According to you, a video of a human and a human are the same thing. The video is just as intelligent and alive as the human. The differences are merely ones of degree or magnitude rather than ones of category. Maybe one video isn't enough, but surely, as we scale the database towards an infinite number of videos, the approximation error will vanish.
Ceci n'est pas une pipe, the map is not the territory, sure.
But I disagree that my argument doesn't hold here - if I re-watch a Hank Green video, I can perfectly model it because I've already seen it. This reveals the video is not alive. But if I watch Hank Green's whole channel, and watch Hank's videos every week, I can clearly tell that the entity the video is showing, Hank Green the Human, is alive.