Unsafe AI might compromise cybersecurity, cause economic harm by exploiting markets as an autonomous agent, personally exploit people, etc. Honestly, none of these harms seems worse than the incredible benefits. I trust humanity can rein it in if we need to. We are very far from AI being so powerful that its mistakes cannot be recovered from safely.
It’s not AGI. But I’m not convinced we need a single model that can reason in order to build super powerful general purpose AI. If a model can detect where it can’t reason and hand those tasks off to better methods or domain-specific models, you can get very powerful results. OpenAI is already on this path with GPT.
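The hand-off idea above can be sketched as a simple router: try a set of specialist tools first, and fall back to the general model only when none applies. This is a minimal illustration, not how GPT actually does it; the `calculator` specialist and `general_model` stub are hypothetical stand-ins.

```python
import re

def calculator(query):
    # Hypothetical specialist: only handles plain arithmetic expressions.
    # Returns None to signal "not my domain" so the router can move on.
    if re.fullmatch(r"[\d\s\+\-\*\/\(\)\.]+", query):
        return str(eval(query))  # fine here since input is vetted arithmetic
    return None

def general_model(query):
    # Stand-in for a call to a general-purpose model.
    return f"[general model answer for: {query}]"

SPECIALISTS = [calculator]  # could add a code runner, symbolic solver, etc.

def route(query):
    """Try each specialist; fall back to the general model if none applies."""
    for tool in SPECIALISTS:
        answer = tool(query)
        if answer is not None:
            return answer
    return general_model(query)

print(route("2 + 3 * 4"))                      # handled by the calculator
print(route("Summarize the plot of Hamlet"))   # falls back to the general model
```

The key design point is the `None` convention: a specialist declines work outside its domain, so capability gaps in one component are covered by another rather than producing a confidently wrong answer.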