The way to stop it from executing an action is probably to put controls on the action side, not on the LLM itself: whitelist which API commands it can send so nothing harmful can go through, and so on. Something like the sketch below.
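A minimal sketch of that idea, assuming the model's proposed action arrives as a structured tool call; the names here (ToolCall, ALLOWED_COMMANDS, executeToolCall) are hypothetical, not from any particular framework:

```typescript
// Gate every command the model proposes against an explicit allowlist.
type ToolCall = {
  command: string;
  args: Record<string, string>;
};

// Only these API commands ever reach the backend, regardless of what the model asks for.
const ALLOWED_COMMANDS = new Set(["get_weather", "search_docs", "list_orders"]);

function executeToolCall(call: ToolCall): string {
  if (!ALLOWED_COMMANDS.has(call.command)) {
    // Refuse anything outside the allowlist instead of trusting the model's judgment.
    return `Refused: "${call.command}" is not an allowed command.`;
  }
  // ...dispatch to the real API here...
  return `Executed ${call.command} with ${JSON.stringify(call.args)}`;
}

// An unknown or harmful command never executes; the refusal can be fed back to the model.
console.log(executeToolCall({ command: "delete_all_users", args: {} }));
console.log(executeToolCall({ command: "get_weather", args: { city: "Oslo" } }));
```

The key point is that the check lives outside the model, so prompt injection can change what the model asks for but not what actually runs.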
This is similar to the halting problem. You can only write an effective policy if you can predict all the side effects and their ramifications.
Of course you could do what Deno and similar systems do and just deny internet or filesystem access outright, but then you significantly limit the usefulness of the AI system. Tricky problem, to be honest.
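Deno's permission flags do let you scope access rather than deny it entirely, e.g. `deno run --allow-net=api.example.com agent.ts` (the hostname is just a placeholder), which is a middle ground: the agent keeps network access, but only to endpoints you've decided are safe.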