Absolutely the AI. At that point in the future, I'm presuming that if something breaks, it's because an external API or some other dependency broke, not because the AI-written code has an inherent bug.
But even if the code itself does break, the AI could still fix it.
And you won't have to tell it anything: if a test fails, an alert goes out and the AI fixes the problem directly.
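For concreteness, here's a minimal sketch of that "test fails, alert fires, AI fixes it" loop. The AI-maintenance endpoint, its URL, and the trigger_ai_fix helper are all hypothetical placeholders, not any real service or API:

```python
# Sketch of an automated loop: run the tests, and if they fail,
# forward the failure log to a (hypothetical) AI-maintenance service.
import json
import subprocess
import urllib.request


def run_tests() -> subprocess.CompletedProcess:
    """Run the test suite and capture its output."""
    return subprocess.run(
        ["pytest", "--maxfail=1", "-q"],
        capture_output=True,
        text=True,
    )


def trigger_ai_fix(failure_log: str) -> None:
    """Send the failure log to a hypothetical AI-maintainer endpoint."""
    payload = json.dumps({"log": failure_log}).encode("utf-8")
    req = urllib.request.Request(
        "https://example.com/ai-maintainer/fix",  # placeholder URL
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    result = run_tests()
    if result.returncode != 0:
        # The failing test run is itself the alert; the AI gets the log
        # and is expected to propose or apply a fix.
        trigger_ai_fix(result.stdout + result.stderr)
```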
When your AI-managed codebase breaks, who are you going to ask to fix it? The AI?