The current technology behind LLMs is still a rather complex guess-the-next-word algorithm. But that leaves the software with no real room to ever move beyond its training, regardless of how much time or training you give it. You could give it literally infinite processing power and it would not suddenly start developing meaningful new knowledge; its output would still be little more than recombinations of its training dataset, until somebody gave it something new to train on.
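To make the point concrete, here is a deliberately tiny sketch of that guess-the-next-word loop. It is a toy bigram model, nothing like a real transformer, and the word counts are made up for illustration; the only point is that generation is just resampling patterns already present in the training data, so nothing new can come out that was not, in some form, already put in.

```python
import random

# Hypothetical "training data": next-word counts observed in a toy corpus.
training_counts = {
    "the": {"cat": 3, "dog": 2},
    "cat": {"sat": 4, "ran": 1},
    "dog": {"ran": 3, "sat": 2},
    "sat": {"down": 5},
    "ran": {"away": 5},
}

def next_word(context_word):
    """Pick the next word in proportion to how often it followed this one in training."""
    options = training_counts.get(context_word)
    if not options:
        return None  # the model knows nothing beyond what it was trained on
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def generate(start, max_len=6):
    """Repeatedly guess the next word until the model runs out of learned continuations."""
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return out

print(" ".join(generate("the")))  # e.g. "the cat sat down"
```

Run it a thousand times or with a thousand times the compute and you still only get reshuffles of "cat", "dog", "sat", "ran": more processing does not add anything the training data did not already contain.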