The only reason Sam would leave OpenAI is if he thought AGI could only be achieved elsewhere, or that AGI was impossible without some other breakthrough in another industry (energy, hardware, etc.).
High-intelligence AGI is the last human invention — the holy grail of technology. Nothing could be more ambitious, and if we know anything about Altman, it is that his ambition has no ceiling.
Having said all of that, OpenAI appears to be all in on brute-force AGI, swallowing the bitter lesson that vast and efficient compute is all you need. But they're overlooking a massive dataset that all known biological intelligences rely upon: qualia. By definition, qualia exist only within conscious minds. Until we train models on qualia, we'll be stuck with LLMs that are philosophical zombies — incapable of understanding our world — a world that consists only of qualia.
Building software capable of utilizing qualia requires us to put aside the hard problem of consciousness in favor of mechanical/deterministic theories of consciousness like Attention-Schema Theory (AST). Sure, we don’t understand qualia. We might never understand. But that doesn’t mean we can’t replicate.
> Sure, we don’t understand qualia. We might never understand. But that doesn’t mean we can’t replicate.
I’m pretty sure it means exactly that. Without actually understanding subjective experience, there’s a fundamental doubt akin to the Chinese room. Sweeping that under the carpet and declaring victory doesn’t in fact victory make.
If the universe is material, then we already know with 10-billion percent certainty that some arrangement of matter causes qualia. All we have to do is figure out what arrangements do that.
Ironically, we understand consciousness perfectly. It is literally the only thing we know — conscious experience. We just don’t know, yet, how to replicate it outside of biological reproduction.
I think a better analogy would be vision. Even with a full understanding of the eye and visual cortex, one can only truly understand vision by experiencing sight. If we had to reconstruct sight from scratch, it would be more important to experience sight than to understand the neural structure of sight. It gives us something to aim for.
We basically did that with language and LLMs. Transformers aren't based on the brain's neural structures for language processing, but they do build on the intuition that the meaning of a sentence consists of the meaning each word has in relation to every other word — the attention mechanism. We used our experience of language to construct an architecture.
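To make that intuition concrete, here's a minimal sketch of scaled dot-product self-attention in plain numpy. The function name, shapes, and toy usage are illustrative, not taken from any real model or library:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q = X @ Wq  # queries: what each word is looking for
    K = X @ Wk  # keys: what each word offers to the others
    V = X @ Wv  # values: what each word contributes
    d_k = Q.shape[-1]
    # Score every word against every other word, scaled for stability
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over each row: how much each word attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each word's representation becomes a weighted mix of all the others
    return weights @ V

# Toy usage: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)  # (4, 8): contextualized representations
```

The point is that nothing here mimics cortical wiring; the design encodes an introspective observation about how meaning in language works.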
I think the same is true of qualia and consciousness. We don't need to know how the hardware works. We just need to know how the software works, and then we can build whatever hardware is necessary to run it. Luckily there are theories of consciousness out there we can try out, with AST being the best fit I've seen so far.
> High-intelligence AGI is the last human invention
Citation?
...or are you just assuming that AGI will be able to solve all of our problems, apropos of nothing but Sam Altman's word? I haven't seen a single credible study suggest that AGI is anything more than a marketing term for vaporware.
Their marketing hyperbole has cheapened much of the language around AI, so naturally it excites someone who writes like a disciple of the techno-prophets.
" High-intelligence AGI is the last human invention" What? I could certainly see all kinds of entertaining arguments for this, but to write it so matter of fact was cringe inducing.
It’s true by definition. If we invent a better-than-all-humans inventor, then human invention will give way. It’s a fairly simple idea, and not one I made up.
It’s analogous to the automobile. People do still walk, bike, and ride horses, but the vast majority of productive land transportation is done by automobile. Same thing with electronic communication vs. written correspondence. New tech largely supplants old tech. In this case, the old tech is human ingenuity and inventiveness.
I don’t think this is a controversial take. Many people take issue with the premise that artificial intelligence will surpass human intelligence. I’m just pointing out the logical conclusion of that scenario.
Arguably cars have so many externalities they will bankrupt the earth of cheap energy sources. Walking is at least hundreds of millions of years old, and will endure after the last car stops.
Likewise (silicon based) AGI may be so costly that it exists only for a few years before it's unsustainable no matter the demand for it. Much like Bitcoin, at least in its original incarnation.
Cars never got us anywhere a human couldn't reach by foot. It just commoditized travel and changed our physical environment to better suit cars.
I really don't see any reason to believe "AGI" won't just be retreading the same thoughts humans have already had and documented. There is simply no evidence suggesting it will produce truly novel thought.
I don't know whether it's a controversial take or not, but I can't see how, if one day the machine magically wakes up and somehow develops sentience, it follows logically that human intelligence would somehow "give way". I was hoping for a clear explanation of how, mechanically, such a thing might happen.