All that can really be conveyed is the "truth" that the model produced the output in response to the input. Given the other vulnerabilities of neural networks (biases, opaqueness, etc.), this is a bit like worrying about a man-in-the-middle (MITM) attack when communicating with a sock puppet.