Unfortunately our AI future involves many more people refusing to use their brains for more than a few seconds, depending instead on AI-generated summaries without knowing which parts are hallucinated, or even what the point was.
Or, they read the transcription, didn't have time to watch the video interview, and used an LLM to turn it into readable prose as an aid to the casual reader. I know a fair bit about the topic at hand :) but not enough to be gung-ho about it on a tech forum frequented by legends.
If you had actually gone through the LLM output, found problems with it, and then posted this comment, that would be fine. Until then, it's an unfounded accusation.