An interesting aspect of OpenAI's agreement with Microsoft is that, until the point of AGI, Microsoft has IP rights to the technology. I'm not sure exactly what's included in that agreement (model, weights, training data, dev tools?), but it's enough that Nadella at least made brave-sounding statements during OpenAI's near implosion that "they had everything" and would not be disrupted if OpenAI were to disappear overnight. I'd guess continued development would still be majorly disrupted, but presumably Microsoft would at least retain the right to carry on using what it already has access to.
The interesting part of this is that whatever rights Microsoft has do not extend to any OpenAI model or software that is deemed to be AGI, so the two companies must presumably have agreed on how that determination would be made, and it would be interesting to know how!
In a recent interview with Dwarkesh Patel, Shane Legg (DeepMind co-founder) gave his own very common-sense definition of AGI as being specifically human-level AI, with the emphasis on general. His test for AGI would be a diverse suite of human-level cognitive tasks (covering the spectrum of human ability), with any system that could pass these tests then being subject to ad hoc additional testing. Any system that not only passed the test suite but also performed at human level on any further challenge tasks might then reasonably be considered to have achieved AGI (per this definition).
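Just to make the shape of that test concrete, here's a rough sketch in Python of the two-stage check as I understand it. Everything here (the task type, the threshold, the function names) is my own placeholder for illustration, not anything specified in the interview:

    from typing import Callable, Iterable

    # A task takes the system under test and returns a score in [0, 1].
    Task = Callable[[object], float]

    # Placeholder standing in for "human-level performance" on a task.
    HUMAN_LEVEL = 0.9

    def passes(system, tasks: Iterable[Task], threshold: float = HUMAN_LEVEL) -> bool:
        """True if the system scores at or above the threshold on every task."""
        return all(task(system) >= threshold for task in tasks)

    def is_agi(system, core_suite: list[Task], adhoc_tasks: list[Task]) -> bool:
        # Stage 1: the diverse suite spanning the spectrum of human ability.
        if not passes(system, core_suite):
            return False
        # Stage 2: further ad hoc challenge tasks, posed only once the suite is passed.
        return passes(system, adhoc_tasks)

The point of the second stage is that a fixed benchmark can be gamed or trained to, so the ad hoc tasks act as a check that the generality is real rather than memorised.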