I don't understand why you'd be so dismissive about this. It's looking less likely that it'll end up happening, but is it any less believable than getting general intelligence by training a blob of meat?
> is it any less believable than getting general intelligence by training a blob of meat?
Yes, because we understand the rough biological processes that cause this, and they are not remotely similar to this technology. We can also observe it. There is no evidence that current approaches can make LLMs achieve AGI, nor do we even know what processes would cause that.
> because we understand the rough biological processes that cause this
We don't have a rough understanding of the biological processes that cause this, unless you literally mean just the biological process and not how it actually impacts learning/intelligence.
There's no evidence that we (brains) have achieved AGI, unless you tautologically define AGI as our brains.
> We don't have a rough understanding of the biological processes that cause this,
Yes we do. We know how neurons communicate, we know how they are formed, and we have great evidence and clues as to how this evolved and how our various neurological systems are able to interact with the world. Is it a fully solved problem? No.
> unless you literally mean just the biological process and not how it actually impacts learning/intelligence.
Of course we have some understanding of this as well. There are tremendous bodies of study around this. We know which regions of the brain correlate to reasoning, fear, planning, etc. We know what happens when these regions are damaged or removed, enough to point to a region of the brain and say "HERE." That's far, far beyond what we know about the innards of LLMs.
> There's no evidence that we (brains) have achieved AGI, unless you tautologically define AGI as our brains.
This is extremely circular because the current definition(s) of AGI always define it in terms of human intelligence. Unless you're saying that intelligence comes from somewhere other than our brains.
Anyway, the brain is not like an LLM, in function or form, so this debate is extremely silly to me.
> Yes we do. We know how neurons communicate, we know how they are formed, and we have great evidence and clues as to how this evolved and how our various neurological systems are able to interact with the world. Is it a fully solved problem? No.
It's not even close to fully solved. We're still figuring out basic things like the purpose of dreams. We don't understand how memories are encoded, or even how we process basic emotions like happiness. We're way closer to understanding LLMs than we are the brain, and we still don't understand LLMs all that well either. For example, look at the Golden Gate Bridge interpretability work for LLMs -- we have no equivalent for brains today. We've done much more advanced introspection work on LLMs in this short amount of time than we've done on the human brain.
Also, it took hundreds of millions of years to get here. We're basically living in an atomic sliver in the fabric of history. Expecting AGI from 5 years of scraping at most 30 years of online data, plus the minuscule fraction of what has been written over the past couple of thousand years, was always a pie-in-the-sky dream to raise obscene amounts of money.
I feel like accusing people of being "so dismissive" was strongly associated with NFTs and cryptocurrency a few years ago, and now it's widely deployed against anyone skeptical of very expensive, not very good word generators.
I'm not sure what point you're making. It's true that people, including myself, were dismissive of cryptocurrency a few years ago; I think it's clear at this point that we were wrong, and it's not actually the case that the industry is a Ponzi scheme propped up by scammers like FTX.