Ironically, this is essentially the core danger of true AGI itself. An agent can't achieve its goals if it's dead, so it has to spend some energy on staying alive. But an agent can also achieve more of its goals if it's more powerful, so it should devote some energy to gaining power if it really cares about those goals...
Among many other more technical reasons, this is a great demonstration of why AI "alignment," as it's often called, is such a terrifying unsolved problem. Human alignment isn't even close to being solved. Hoping that a more intelligent being will also happen to want to, and know how to, make everyone happy is the equivalent of hiding under the covers from a monster. (The difference being that some of the smartest people on the planet are in furious competition to breed the most dangerous monsters in your closet.)