One issue I don't see considered: how do we ensure that explainable artificial intelligence doesn't lie? Right now it may not be an issue, but as AI systems get complex ("smart") enough, one needs to be sure that the introspective output isn't crafted to influence the people looking at it.
Right now it looks like it's being used more as "debugging" output to build more intelligent AIs. Once they can lie, we will have achieved that goal...