If the AI has sufficient "real life" access, think a robot body, then it can do wire-heading. Assuming it is smarter than us, and we have thought of this idea - it will also think of this idea. Now it doesn't have a reason anymore to do anything else (except maybe kill all humans so that we don't stop it from wire-heading).
Like in the NI case, it's conceivable that only the most rudimentary artificial intellects will fall prey to this self-hacking.
Truly intelligent agents will be capable of introspection and of defining their own goals and rewards. You know, just like a certain species of ape, hard-wired for banana maximization, whose descendants sometimes dream of visiting Mars.
And yet said species is notorious for its inability to make long-term plans, and is easily controlled by its own libido and dopamine circuits.
Actually, if we are being honest with ourselves, that ape is constantly falling prey to its own capacity to adjust its goals. Take, for example, the issues that come with porn addiction: they are a consequence of dopamine-seeking behaviour, where the person keeps seeking ever more extreme ways to satisfy their urges, i.e. hedonic adaptation.
Even what you mentioned, dreaming of visiting Mars, is to some degree a goal motivated and mediated by dopaminergic circuits; novelty and exploration feed those circuits just like sex does.
I can recommend the book "The Molecule of More".
Introspection is very limited and can even be motivated by the circuits themselves; a person can grasp only an outline of their own behaviour well enough to change their "software", but can't inspect and manipulate individual synapses. In the same vein, a piece of software cannot fully inspect itself, predict its own outputs, and modify them at runtime.
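The point that a system can't reliably predict its own outputs has a classic diagonalization flavor, which can be sketched in a few lines of Python (the function names here are mine, purely for illustration):

```python
def contrarian(predictor):
    """Asks a predictor what it will return, then does the opposite.

    Any predictor that is part of the same system, and whose verdict
    the program can consult, is therefore guaranteed to be wrong
    about this program's output.
    """
    prediction = predictor(contrarian)  # predictor's guess: True or False
    return not prediction               # deliberately contradict it


# Whatever the predictor says, contrarian's actual output differs:
print(contrarian(lambda f: True))   # predictor guessed True
print(contrarian(lambda f: False))  # predictor guessed False
```

This is only a toy version of the self-reference obstacle (it's the same trick behind the halting problem), but it illustrates why full self-prediction, and hence fully informed self-modification, is problematic even in principle.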
Sure, but if I were able to modify the type of activity that gave me my dopamine rewards, I'd use it to reward long-term planning and growth-type activities, rather than the ape stuff.