On the other hand, Google Translate is (was?) a clear example that word-by-word translation isn't intelligent, or even decidable. So he's got a point, but far from a general proof.
That doesn't at all address the point of the argument. Given the following
- an assumption that you can generate an algorithm to express behavior indistinguishable from a human's at a given task, and
- an implementation of the algorithm at a macroscopic scale, carried out by individual humans, each executing a small part of the algorithm
then, where in this system would you say that an actual understanding of the task exists? Google Translate doesn't pass the first requirement.
That assumption is too simplistic. If the algorithmic behavior were indistinguishable from human behavior, and carried out by humans, it would just be human behavior. Of course machine translation doesn't pass the requirement for human behavior. Nothing does, except humans. And if newer machine translation does pass, I'd say that's humanly accomplished, by use of tools.
Nobody could learn an unfamiliar language from nothing but a dictionary, well enough to fool a native speaker. The assumption is ridiculous. Either way, the dictionary isn't conscious.
If you have a challenge to the Chinese Room argument, go read and respond to Searle's original essay. There's no point in responding to my TL;DR version -- offered only to correct your initial misapprehensions -- with further misapprehensions.