Yes, exactly. OpenAI's behavior offers plenty of clues that they know we've hit a plateau, and have known for some time.
A huge one for me is that Altman cries "safety" while pushing out everyone who actually cares about safety. Why? Because he desperately wants governments to build him a moat, yesterday if possible. He's not worried about the risks of AGI; he's afraid his company won't get there first because they're not making progress anymore. They're rushing to productize what they have because they've lost their only competitive advantage (model quality) and don't see a path to getting it back.