The way this "game" presents its "moral" without even attempting to justify it logically is simply offensive.
The "bad" ending is not in any way connected to the project; it could just as well say that you do nothing and a year later an asteroid hits Earth carrying the same nano-whatever.
The "good" ending is the hero saying that he wants to reverse the effects of "global warming". But if you have the ability to build one millionth of a Dyson swarm in space, global warming is not a problem, because you could build a planet-scale swarm for fine-grained control of the weather and reverse, say, the effects of the earlier climate change that turned the Sahara into a desert.
Edgar drops many hints that he has ill intent. For example, he can predict which parts of the bad scenarios he should report to you, yet he still does the bad things in those scenarios. He can also predict when human leadership will shut the project down. Which means he can predict what he should be doing in the first place, yet he still presents misaligned scenarios to you, making you mistakenly believe that your prompt makes a difference.
This dishonesty and manipulation is an obvious hint that you can't trust Edgar.
The moral isn't that all AI (or all tech) is bad; it's that alignment is tricky, AI tropes make no sense, and we should be very, very careful.
The story explicitly tells us that the virus was picked up by accident. But even if we discard that part and assume Edgar did it intentionally, the story is still not logical, because the other choice, reverting climate change, gives Edgar just as much opportunity for sabotage. Besides, if your prompt makes no difference, why would the absence of your prompt make a difference? And if you skip writing the prompt and go eat ice cream, why do you think someone else would not write a worse one?
The flaw in your logic is that you are trying to solve an alignment problem, while the core of the issue is a concentration-of-power problem.
E.g. right now, if I had the power, I would kill a certain group of 95.12 million people without remorse or hesitation, while four years ago I would have considered anyone with such thoughts mad. I refrain not because of my values, but only because I don't possess such power. And AI will be exactly the same.
The very premise of this story is broken, because in real life, to build and use such a swarm you would need trillions of people and AI agents living on several planets and many space stations. And if in the end there is a nano-virus outbreak and several billion die (which is less than a percent), that is just what has always happened, and it would still be better than the alternative of many more people dying from lack of energy, or from old age.