ChatGPT (an instruction-tuned autoregressive language model) indeed already seems quite general (it holds up in conversational Turing tests without faking it the way ELIZA did), even if its absolute intelligence is limited. Level of generality and level of intelligence are not the same thing. Something could be quite narrow but very intelligent (AlphaGo) or quite general but dumb overall (a small kid, an insect).
Okay, ChatGPT is only text-to-text, but Google & Co are adding more modalities now, including images, audio and robotics. I think one missing step is to fuse the training and inference regimes into one, just as in animals. That probably requires something other than the usual transformer-based token predictors.
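To make the train/inference split concrete: a deployed model's weights are frozen when it answers you, whereas a fused regime would treat every interaction as a training step as well. Here's a minimal toy sketch of the difference; the tiny PyTorch model, data, and learning rate are illustrative assumptions, not how any production system actually works:

```python
import torch
import torch.nn as nn

# Toy "language model": maps a feature vector to next-token logits.
model = nn.Linear(16, 100)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def frozen_inference(x):
    """Today's regime: weights never change after training."""
    with torch.no_grad():
        return model(x).argmax(dim=-1)

def fused_step(x, observed_token):
    """Hypothetical fused regime: every interaction is also a training
    step, so the model keeps learning while it is being used."""
    logits = model(x)
    prediction = logits.argmax(dim=-1)
    loss = loss_fn(logits, observed_token)  # learn from what actually happened
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                        # weights updated online
    return prediction

# Usage: the fused loop both answers and adapts on each turn.
x = torch.randn(1, 16)
target = torch.tensor([42])
print(frozen_inference(x))    # static behaviour
print(fused_step(x, target))  # behaviour can drift as it learns
```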
I feel like, even though I nominally speak the same language as everyone else, none of you are using the same definitions I do for any of these terms.
"AGI" came about because people had been using the older term "AI" for things that turned out not to be what we thought AI was going to be.
Like a lot of technology terms, all of this has its origins in science fiction, where AI was supposed to be the equivalent of a human mind, but constructed out of something other than meat. The AI would have agency; it would do things... and do them because it wanted to. It would have goals that it might fail or succeed at. And it would learn... a proper AI might be constructed knowing nothing about a particular subject, but it could then go on to learn (on its own, without any outside help) all about that topic. Perhaps even to the point of conducting its own original research to learn more. A sufficiently intelligent AI would go on to learn things no human had ever learned, to produce inventions and theories no human had conceived of.
But then we all realized that intelligence might be severable from those other parts, and we might have an "oracle" that, when asked questions, could provide sensible answers but would have no agency. It wouldn't be able to learn in any real way, but since it already knew the sensible answers, that didn't matter.
And at that point, you see AGI start being used. And I assumed it meant "well, that is what we'll call Asimov's robots, or Skynet, or whatever".
Except, here you are again using AGI to mean the dumb oracles that aren't intelligent in any meaningful way.
>> I think one missing step is to fuse training and inference regime into one, just as in animals.
This has always been an important missing piece. Without it, ChatGPT is just a natural-language interface to the information it was trained on. Still useful, but unable to learn anything new (aside from what fits in its context).
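To be concrete about "aside from context": the only thing that persists between turns is the text you keep re-sending; nothing about the model itself changes, and clearing the context wipes whatever it appeared to "learn". A toy sketch, where call_model is a hypothetical stand-in for an actual LLM call:

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call; just echoes what it was given."""
    return f"[answer conditioned only on]: {prompt!r}"

history = []  # the only place "learning" lives

def chat(user_msg: str) -> str:
    """Each request re-sends the whole history; no weights are updated."""
    history.append(f"User: {user_msg}")
    reply = call_model("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

chat("My name is Ada.")
print(chat("What is my name?"))   # can "remember" only via the history text

history.clear()                   # new session: everything "learned" is gone
print(chat("What is my name?"))
```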
We are not doing the same thing as ChatGPT. For instance: because of its training, ChatGPT tries to answer like a human. Humans don't try to answer like a human.
One distinction I would make is that a true AGI should have internet access and be able to query for up-to-date information, instead of being frozen at the moment in time it was trained (see the sketch below).
Imagine if someone took away your speech, hearing, TV, radio, newspaper, and the ability to order new books - you only had access to the knowledge you already had. You're only allowed to communicate via a serial terminal, and can only respond, not initiate.
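A rough sketch of the kind of loop I mean: retrieve something current, then answer conditioned on it, so the model isn't limited to its training cut-off. The Wikipedia summary endpoint and the call_model stub are just stand-ins for whatever live source and model a real system would use:

```python
import requests

def fetch_summary(topic: str) -> str:
    """Pull a current summary from Wikipedia's public REST API
    (stand-in for whatever live source a real system would query)."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "")

def call_model(prompt: str) -> str:
    """Placeholder for an actual LLM call; just echoes the prompt here."""
    return f"[model would answer based on]: {prompt[:120]}..."

def answer_with_fresh_context(question: str, topic: str) -> str:
    """Prepend freshly retrieved text to the question, so the answer
    isn't limited to what the model saw at training time."""
    context = fetch_summary(topic)
    prompt = f"Context (retrieved just now):\n{context}\n\nQuestion: {question}"
    return call_model(prompt)

if __name__ == "__main__":
    print(answer_with_fresh_context("What is an LLM?", "Large_language_model"))
```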
I'm not sure how anyone could be this naive. Mammal brains don't have this train-mode/inference-mode split; both are running at all times. If what you said were true, then if I taught you something today you wouldn't be able to perform that action until tomorrow. Hell, schools would be an insane concept if this were true. Try to think a bit more before confidently stating an answer.
Sleep could be for long-term memory, but clearly not everything else is "context" (short-term memory). Maybe you learn something in the morning that you need to remember for >12 hours before you go to bed.