It will be very alien to us - "it's just predicting the next word" is what I have heard repeatedly said about ChatGPT.
First, AI is far more than just ChatGPT; don't presume the same thing is happening everywhere.
Second, the LLMs are all reasoning machines drawing on encyclopedic knowledge. A great analogy I heard recently is a student parroting the names of presidents to seem smart: it isn't thinking in exactly the manner we do, but it is applying a kind of reasoning. ChatGPT may be doing something akin to prediction, but it does so in a way that exposes reasoning. As the parent mentioned, our own brains use networks that are refined over time through pruning, and a huge number of our behaviors are "automatic". If you go looking for "consciousness" you may never find it in a machine, but it doesn't really matter if the machine can perfectly mimic everything else you do or say.
An unfeeling, unconscious, yet absolutely "aware" and hyperintelligent machine is possibly the most alien thing we can fathom, and I agree there is no "end game"; there is likely no mathematical limit to how far you can take this.
Human minds also tend to predict the next word. And we still don’t know how intelligent behavior and capability to model the world emerges in humans. It’s quite possible that it is also based on predicting what would happen next and on compressed storage of massive amounts of associative memory with attention mechanisms.
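To make "predicting the next word" concrete: here's a toy sketch of my own (a simple bigram count model, nothing like ChatGPT's internals) showing how next-word prediction can fall out of nothing but observed statistics. Real LLMs use learned attention over long contexts, but the training objective is this same next-token prediction.

```python
from collections import Counter, defaultdict

# Toy corpus; the model only "knows" what it has seen.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" twice, more than any other word
```

The point of the analogy: whether such prediction, scaled up enormously with attention and compression, amounts to "reasoning" is exactly what's in dispute.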
The books are not alien to us. A mind that's born out of compressing them might be an entirely different thing. Increasingly so as it's able to grow on its own.