We really have no idea how to directly compare the two.
Also, vast portions of the human brain are dedicated to vision, smell, breathing, muscle control... things which have value to us but which don't obviously count toward knowledge work when estimating how many parameters it would take to replace human knowledge work.
While those portions of the brain aren't specific to learning intellectual or academic information, they might be crucial for making sense of data, for testing what we learn, and for bridging countless gaps between model/simulation and reality (whatever that is). Hopefully that makes sense. Sort of like... holistic learning.
I wonder if our brains and bodies are not all that separate, and whether the intangible features of that unity might be very difficult to quantify and replicate in silico.
We can say that such and such part of the brain is "for" this or that. Then it releases neurotransmitters or changes the level of hormones in your body which in turn have cascading effects, and at this point information theory would like to have a word.
"If our small minds, for some convenience, divide this glass of wine, this universe, into parts -- physics, biology, geology, astronomy, psychology, and so on -- remember that nature does not know it!" -Richard Feynman
I think it's even more interesting that the required amount of energy to do that high computational work isn't that high. Evolution has been working on it for a long time, and some things are really inefficient but overall it does an OK job at making squishy machines.
I had a good chuckle at "squishy machines". That's a really interesting way to think about it. It makes me wonder if, some day, we will be able to build "squishy machines" of our own, capable of outperforming silicon while using a tiny fraction of the energy.
We have no idea how to estimate the computational capacity of the brain at the moment. We can make silly estimates like saying that 1 human neuron is equivalent to 1 unit in an artificial network, but this is definitely wrong; biological neurons are far more complex than that.
The big problem is that we don't understand the locus of computation in the brain. What is the thing performing the meaningful unit of computation in a neuron? And what is a neuron really equivalent to?
The ranges are massive.
Some people say that computation is some high level property of the neuron as a whole, so they think each neuron is equivalent to just a few logic gates. These people would say that the brain has a capacity of about 1 petaFLOP/s. https://lips.cs.princeton.edu/what-is-the-computational-capa...
Then there are people who think every Na, K, and Ca ion channel performs meaningful computation. They would say the brain has a capacity of 1 zettaFLOP/s. https://arxiv.org/pdf/2009.10615.pdf
Then there are computational researchers who just want to approximate what a neuron does. Their results suggest that a single neuron behaves more like a whole 4-8 layer artificial network, which would place the brain somewhere in the yottaFLOP/s range. https://www.quantamagazine.org/how-computationally-complex-i...
And we're learning more about how complex neurons are all the time. No one thinks the picture above is accurate in any way.
Then there are the extremists who think that there is something non-classical about our brains. That neurons individually or areas of the brain as a whole exploit some form of quantum computation. If they're right, we're not even remotely on the trajectory to matching brains, and very likely nothing we're doing today will ever pay off in that sense. Almost no one believes them.
Let's say the brain is in the zettaFLOP/s range: 10^21 FLOP/s. Training GPT-3 took about 10^23 FLOP total over 34 days. 34 days is 2,937,600 seconds, so 10^23 / (3x10^6) is about 3x10^16 FLOP/s. By this back-of-the-envelope computation the brain has roughly 4-5 orders of magnitude more capacity, i.e. about 30,000x. This makes sense: they were basically using a tens-of-petaFLOP/s supercomputer, which we already knew. We'll have zettaFLOP/s supercomputers before long; yottaFLOP/s is less certain, and people worry we'll hit fundamental physical limits before we get there.
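The back-of-the-envelope numbers above can be checked in a few lines. To be clear, every figure here is a rough assumption pulled from this thread (the 10^23 FLOP / 34-day GPT-3 training run, and the three brain-capacity camps), not a measurement:

```python
import math

# GPT-3 training throughput, per the thread's rough figures.
SECONDS = 34 * 24 * 3600              # 34 days = 2,937,600 s
gpt3_total_flop = 1e23                # assumed total training compute
gpt3_flops = gpt3_total_flop / SECONDS  # effective ~3.4e16 FLOP/s

# The three brain-capacity estimates discussed above (all speculative).
brain_estimates = {
    "whole-neuron ~ few logic gates": 1e15,  # ~1 petaFLOP/s
    "per-ion-channel computation":    1e21,  # ~1 zettaFLOP/s
    "neuron ~ 4-8 layer ANN":         1e24,  # ~1 yottaFLOP/s
}

# Ratios span ~10^-1.5 to ~10^7.5 depending on which camp is right.
for name, flops in brain_estimates.items():
    ratio = flops / gpt3_flops
    print(f"{name}: brain/GPT-3 throughput ~ 10^{math.log10(ratio):.1f}")
```

Note how wildly the conclusion swings: under the logic-gate view, GPT-3's training cluster already out-computes a brain; under the ANN-per-neuron view, we're off by more than seven orders of magnitude.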
All of this is a simplification and there are problems with every one of these estimates.
But, in some sense, none of it means anything at all. You can have an extremely efficient algorithm that runs a million times faster than an extremely inefficient one. Machines and brains do not run the same "software", the same algorithms, so comparing their hardware directly tells you very little.
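A toy illustration of that point: the same function computed by two different algorithms can differ in work done by factors that dwarf any plausible hardware gap. Fibonacci is just a stand-in here, not a model of cognition:

```python
# Count the basic operations performed by two algorithms for the
# same function: naive recursion vs. a simple iterative loop.

def fib_naive(n, counter):
    counter[0] += 1          # one call = one unit of work
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_iter(n, counter):
    a, b = 0, 1
    for _ in range(n):
        counter[0] += 1      # one loop step = one unit of work
        a, b = b, a + b
    return a

slow, fast = [0], [0]
assert fib_naive(30, slow) == fib_iter(30, fast)
print(slow[0] // fast[0])    # naive does ~90,000x more work at n=30
```

And the gap grows without bound as n increases, which is exactly why a raw FLOP/s comparison between two systems running unknown, different algorithms says so little.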
This is an important point. On the one hand, real neurons are a heck of a lot more complex than a single weight in a neural network, so exactly mimicking a human brain is still well outside our capabilities, even assuming we knew enough to build an accurate simulation of one. On the other hand, there's no intrinsic reason you would need to in order to get similar capabilities in a lot of areas: especially when you consider that neurons operate in a very 'noisy' environment, it's very possible there's a huge overhead to the work they do to compensate for it.
Since it is already giving interesting results, let's say a "brain" is a connectome with its current information flow.
Comparing AI with a brain in terms of scale is somewhat hazardous, but with what we know about real neurons and synapses, one brain is still several orders of magnitude above the current biggest AIs (not to mention that current AI is 2D and very local, whereas the brain is 3D and much less locality-constrained).
The "self-awareness" zone would need a connectome at least 1000x bigger than that of the current biggest AI: redundant, with a saveable flow of information, 3D, and less locality-constrained. Not to mention realtime rich inputs/outputs and years of training (like a human baby).
Of course, this is beyond us; we have no idea what's going on, and we probably won't. This is totally unpredictable, and anybody saying otherwise is either trying to extract money for some BS AI research or is a genuine genius.
Now the question I have is how small a model could be and still fascinate a large population of smarter creatures. On the one hand we know that the human brain has a lot of power, but on the other it can be "manipulated" by less intelligent creatures.
How many? Based on current knowledge of human neurons and synapses?