Possible. AGI/ASI are poorly defined. I tend to think we're already at AGI, obviously many disagree.
> For example, human intelligence (the closest thing we know to AGI) requires extremely complex sensory and internal feedback loops and continuous processing unlike autoregressive models' discrete processing.
I've done a fair bit of connectomics research, and I think this framing elides the ways in which artificial neural networks and biological networks are actually quite similar. For example, in the mouse olfactory system there is something akin to a 'feature vector' that emerges from which neurons light up: a specific set of active neurons means 'chocolate' or 'lemon' or whatever. More generally, neuronal representations seem roughly analogous to embedding representations, and you could imagine constructing an embedding space based on which neurons light up where. Everything on top of the embeddings is 'just' processing.
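To make the analogy concrete, here's a minimal sketch (all neuron indices and odor labels are made up for illustration): treat each odor's activation pattern as a binary vector over recorded neurons, and measure similarity between odors the way you would between embeddings.

```python
import numpy as np

# Hypothetical sketch: suppose we record 1000 olfactory neurons, and each
# odor activates a characteristic subset. That subset is a 1000-dim
# "feature vector", and odors become points in an embedding-like space.
N_NEURONS = 1000

def activation_pattern(active_indices, n=N_NEURONS):
    """Binary vector: 1.0 where a neuron fired, 0.0 elsewhere."""
    v = np.zeros(n)
    v[list(active_indices)] = 1.0
    return v

# Made-up activation sets, purely for illustration.
chocolate = activation_pattern({3, 17, 42, 99, 250})
lemon     = activation_pattern({5, 17, 88, 301, 702})

def cosine(a, b):
    """Cosine similarity, as used for comparing embeddings."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Odors sharing active neurons (here, neuron 17) sit closer together
# in this space, just as related tokens do in a learned embedding space.
print(cosine(chocolate, lemon))  # 0.2 (1 shared neuron out of 5 each)
```

This is obviously a cartoon (real activations are graded and dynamic, not binary), but it captures the structural parallel: population codes and embedding vectors both represent meaning as geometry.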
I believe we already have the technology required for AGI. It's perhaps analogous to a crewed lunar station or a two-mile-tall skyscraper: we have the technology required to build it, but we don't, for various reasons.