>And our AI is not an alien to us. It’s just our books compressed.
This is a potential mistake.
Do you have pets? Even if you don't, you'd generally consider animals to be intelligent, right? But even the smartest animal you've seen is far dumber than the average human being (except visitors at national parks operating trash cans). You can teach many kinds of animals quite a bit, but eventually you hit the physical limits of their architecture.
Even humans have limitations. You learn the most when you're a child, and you do it in an interestingly subtractive way. Your brain is born with a huge number of synaptic connections. As we learn and age, we winnow down those connections and spend a far smaller share of our energy on our minds. Simply put, we become limited in how much we can learn as we age. You have to sleep. You get brain fog. You end up forgetting.
With A(G|S)I, many questions remain unanswered. How far can it scale? I see no reason it cannot scale far past the human mind. Why would humans be the most optimized intelligence architecture there is? It seems unlikely evolution would have produced that. When you ponder the idea that something could be far more intelligent than you, come back to the thought of your pets. They live in the same reality as you, but you exist on an entirely different plane of existence thanks to your thinking abilities. What is the plane of existence like for something that can access all of humanity's information at once? What is it like for something that can see in infrared, ultraviolet, and radio, in the context of whatever tooling it can almost instantly connect to and work with, something that can work with raw data from sensors all over the planet feeding back to it at light speed?
Now, you're most likely correct that before we get some super ASI far beyond us, we'll have some AGI just good enough to empower someone greedy and cause no shortage of hell on earth. But if somehow that doesn't happen, we still have the alien to contend with.
It will be very alien to us. "It's just predicting the next word" is what I have repeatedly heard said about ChatGPT.
First, AI is far more than just ChatGPT; don't presume the same thing is happening everywhere.
Second, the LLMs are all reasoning machines drawing on encyclopedic knowledge. A great analogy I recently heard: they are like a student parroting the names of presidents to seem smart. It isn't thinking in the exact manner that we do, but it is applying a form of reasoning. ChatGPT may be doing something akin to prediction, but it is doing it in a manner that exposes reasoning. As the parent mentioned, our own brains use networks that are refined over time by removal, and a huge number of our behaviors are "automatic". If you go looking for "consciousness" you may never find it in a machine, but it doesn't really matter if the machine can perfectly mimic everything else that you do or say.
An unfeeling, unconscious, yet absolutely "aware" and hyperintelligent machine is possibly the most alien thing we can fathom. And I agree there is no "end game": there is likely no mathematical limit to how far you can take this.
Human minds also tend to predict the next word. And we still don't know how intelligent behavior and the capability to model the world emerge in humans. It's quite possible that they are also based on predicting what would happen next, and on compressed storage of massive amounts of associative memory with attention mechanisms.
The books are not alien to us. A mind that's born out of compressing them might be an entirely different thing. Increasingly so as it's able to grow on its own.