The problem with defining intelligence as 'whatever a human can do' is that it doesn't break down the problem space correctly. It also leaves potential gaps: a system could have a form of intelligence that humans do not, leading humans to misjudge that system's capabilities, and that misjudgment could lead to disaster (a common AI risk scenario).