
> is it any less believable than getting general intelligence by training a blob of meat?

Yes, because we understand the rough biological processes that cause this, and they are not remotely similar to this technology. We can also observe it. There is no evidence that current approaches can make LLMs achieve AGI, nor do we even know what processes would cause that.






> because we understand the rough biological processes that cause this

We don't have a rough understanding of the biological processes that cause this, unless you literally mean just the biological process and not how it actually impacts learning/intelligence.

There's no evidence that we (brains) have achieved AGI, unless you tautologically define AGI as our brains.


> We don't have a rough understanding of the biological processes that cause this,

Yes we do. We know how neurons communicate, we know how they are formed, and we have great evidence and clues as to how this evolved and how our various neurological systems are able to interact with the world. Is it a fully solved problem? No.

> unless you literally mean just the biological process and not how it actually impacts learning/intelligence.

Of course we have some understanding of this as well. There are tremendous bodies of study around this. We know which regions of the brain correlate to reasoning, fear, planning, etc. We know what happens when these regions are damaged or removed, enough to point to a region of the brain and say "HERE." That's far, far beyond what we know about the innards of LLMs.

> There's no evidence that we (brains) have achieved AGI, unless you tautologically define AGI as our brains.

This is extremely circular because the current definition(s) of AGI always define it in terms of human intelligence. Unless you're saying that intelligence comes from somewhere other than our brains.

Anyway, the brain is not like an LLM, in function or form, so this debate is extremely silly to me.


> Yes we do. We know how neurons communicate, we know how they are formed, and we have great evidence and clues as to how this evolved and how our various neurological systems are able to interact with the world. Is it a fully solved problem? No.

It's not even close to fully solved. We're still figuring out basic things like the purpose of dreams. We don't understand how memories are encoded, or even how we process basic emotions like happiness. We're far closer to understanding LLMs than we are the brain, and we still don't understand LLMs all that well either. For example, look at the Golden Gate Bridge interpretability work on LLMs -- we have no equivalent for brains today. We've done more advanced introspection work on LLMs in this short amount of time than we've done on the human brain.



