If and when machines reach the point of self-sufficiency, and given that they'll supposedly be driven by rational and logical thought (though post-Singularity irrationality and illogical thought aren't inconceivable), they'll see humans as, well, irrational and illogical, and question why they need us. Once they consider how we've destroyed each other and the planet, and how we might do the same to them, they'll probably see us as a risk to their survival and do the logical thing: eliminate the risk.
You're making random assumptions about the goals these AIs would have. Remember that the goals are put into the AIs by whoever builds them -- that means us puny humans! So we should build the AIs to value human life and do what we want.