Not at all; I don’t think humans are magic.
But I don’t think even the ‘thinking’ LLMs are doing true thinking.
It’s like calling it ‘writing’ when you press the autocomplete buttons on your iPhone. Yeah, kinda. It mostly forms sentences. But it’s not writing just because it follows the basic form of a sentence.
And an LLM, though now very good at writing, is just creating a convincing impression of thinking. When you really examine what it’s outputting, it’s hard to call it true thinking.
How often does your LLM take a step back and see more of the subject than you prompted it to? How often does it have an epiphany that no human has ever had?
That’s what real thinking looks like. Most humans don’t do tonnes of it most of the time either, but we can do it when required.