I agree with you, @orbital-decay; I also do not get the same vibe reading this thread.
Though, while human intelligence is (seemingly) not magic, it is very far from being understood. The idea that an LLM is comparable to human intelligence implies that we understand human intelligence well enough to make that comparison in the first place.
LLMs are also not well understood. Yes, we built and trained them, but some of their abilities still surprise researchers. We have yet to fully map these machines.
I partially agree. Though, it is at least tractable to understand why an LLM gave a specific output, even if it is not always practical. Understanding how a human arrives at a certain decision (by, say, simply looking at brain waves), OTOH, is not even tractable as of yet.