> The idea that "human-like" behaviour will lead to self-awareness is both unproven (it can't be proven until it happens) and impossible to disprove (like Russell's teapot).
I think Searle's view was that:
- while it cannot be dis-_proven_, the Chinese Room argument was meant to provide reasons against believing it
- the "it can't be proven until it happens" part is misunderstanding: you won't know if it happens because the objective, externally available attributes don't indicate whether self-awareness (or indeed awareness at all) is present
The short version of this is that I don't disagree with your interpretation of Searle, and my paragraphs immediately following the link weren't meant to be a direct description of his point with the Chinese Room thought experiment.
> while it cannot be dis-_proven_, the Chinese Room argument was meant to provide reasons against believing it
Yes, like Russell's teapot. I also think that's what Searle means.
> the "it can't be proven until it happens" part is misunderstanding: you won't know if it happens because the objective, externally available attributes don't indicate whether self-awareness (or indeed awareness at all) is present
Yes, agreed, I believe that's what Searle is saying too. I think I was maybe being ambiguous here - I wanted to say that even if you forgave the AI maximalists for ignoring all the relevant philosophical work, the notion that "appearing human-like" inevitably tends toward what would actually be "consciousness" or "intelligence" is more than just a big claim.
Searle goes further, and I'm not sure if I follow him all the way, personally, but it's a side point.