This seems like a weak argument to me. We don't really understand how human intelligence works yet, so how can we claim that computers will never realize similar intelligence? We don't know for a fact that human intelligence depends on these things.
I'm personally skeptical that we'll see AGI any time soon but I don't think we know enough to say this definitively.
>This seems like a weak argument to me. We don't really understand how human intelligence works yet, so how can we claim that computers will never realize similar intelligence?
My comment doesn't say that "computers will never realize similar intelligence".
It says that they will never realize it through reasoning- and rule-based systems, in the style of 1960s-1990s AI.
Which isn't the way we realize it either, even if we don't fully understand how we do realize it yet.