GPT-4 arrived anywhere from 5 to 20 years early, depending on which "expert" you asked in 2019. The same thing happened with AlphaGo: in 2014, beating a human master at Go was supposedly still 15 years off. Then it happened a year later.
Humans are on the cusp of learning that, just like everything else so far, there is nothing special or unique about the meat version of intelligence. In fact it's likely hilariously weak compared to purpose-built general intelligences. Similar to a bird racing a jet, a cheetah against a V12, or a horse pulling against a truck.
Does this even contradict the person you're responding to? Jets, V12s, and trucks are operated by people; they're not fully automated. AlphaGo has not put anyone out of work, and it's been a decade now, so 15 years isn't that far off. Go was supposed to be such a monumental benchmark because it was widely assumed to be a sufficiently intellect-complete task: any software that mastered it could surely do just about anything intellectual. Winning Jeopardy was supposed to mean that too, and Watson did it even before AlphaGo was invented, yet Watson hasn't put anyone out of work either, even though it's been around for over 13 years now.
Your takeaway seems to be that software will always do things ahead of when we expect. My takeaway is that we're incredibly bad at guessing which tests and benchmarks mean software will be able to fully replicate, best, and replace human reasoning and decision-making. Beating games, predicting protein folding, forming reasonable-sounding paragraphs, and scoring high on the LSAT have all turned out not to be enough.