This feels a lot like Douglas Hofstadter's elaborate explanation, in Gödel, Escher, Bach: An Eternal Golden Braid, of why computers would never beat the best human chess players.
Every couple of months we get hit with the same wave of AI stories, and every couple of months I post these two links, inspired by ideas from that book.
Gödel's Incompleteness Theorem and Turing's Halting Problem.
You can't build a perfect machine, because that would imply understanding reality perfectly.
Ugly reality is going to break your perfect machine, eventually. With a long enough time horizon, the probability approaches 1: if each run has some independent chance of failure p > 0, the probability of surviving n runs is (1-p)^n, which goes to 0.
When your machine breaks, you are going to need something else: either another, newer machine that can fix or replace it (in Gödel's example, the new book of logic/truth), or something dumb like human wetware, just flexible enough to know the right answer is "unplug the machine and plug it back in."
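To make the Turing half of this concrete, here's a minimal Python sketch of the diagonal argument. The `halts` oracle is hypothetical, assumed to exist only so the contradiction can be shown; nothing here is a real implementation:

```python
# Sketch of Turing's halting-problem argument.
# `halts` is a hypothetical oracle, assumed (for contradiction)
# to correctly report whether program(arg) ever terminates.

def halts(program, arg) -> bool:
    """Assumed perfect decider -- Turing proved none can exist."""
    raise NotImplementedError  # no real implementation is possible

def paradox(program):
    """Do the opposite of whatever the oracle predicts about
    running `program` on its own source."""
    if halts(program, program):
        while True:   # oracle says it halts, so loop forever
            pass
    else:
        return        # oracle says it loops, so halt immediately

# Feed paradox to itself: paradox(paradox) halts if and only if
# halts(paradox, paradox) says it doesn't -- a contradiction.
# So the "perfect machine" (a total halting oracle) cannot be built.
```

That's the whole point of the argument: the perfect machine fails not on some exotic input, but on a question about itself.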
> If you ask me in principle if it’s possible for computing hardware to do something like thinking, I would say absolutely it’s possible. Computing hardware can do anything that a brain could do, but I don’t think at this point we’re doing what brains do. We’re simulating the surface level of it, and many people are falling for the illusion. Sometimes the performance of these machines is spectacular.
If we're successfully simulating the surface level of it, then the underlying mechanism is (imho) totally irrelevant to the user. If general intelligence is happening, does it really matter whether the underlying mechanism is neurons, transistors, or vacuum tubes?
Here's the fun philosophical question: do you have a below-surface-level understanding of anything? I mean, sure, you know that if you do this, something else does that, but do you really understand?
And this is too bad coming out of DH, because in so many ways I think he's one of the best evangelists out there for understanding what computers can do, and for arguing against stuff such as Hubert Dreyfus' claims that computers can't do X.
That's unfair. Hofstadter said he thought it wouldn't happen until AI programs were our equals in general; he emphasized that he was guessing and that his colleagues would disagree; and the explanation was of the worldview that led him to that guess.