I don't think there will be a gap between AGI and ASI.
The definition of AGI keeps shifting: any time an AI can do something, it gets reclassified as "just engineering." Current AIs, although narrow, are already superhuman within their domains. A language AI can converse in more languages than any living human can learn. A chess-playing AI can beat any living human. So each time an AI catches up on some metric, it won't stay at human level for long; it becomes superhuman very quickly.
By the time an AI finally masters the last "only a human can do this" task, it will already be superhuman in every other way.
If LLMs were profoundly incompetent at conversation, we wouldn't be worried about their weaponization to sway public opinion. If the text, images, voices, or videos they produce were worthless, we wouldn't be worried about them displacing carbon-based artists. Any commercially relevant shortcomings present today will be gone in version n+1, or soon after.