Microsoft Copilot falls down completely when it comes to citations: even when the answer is correct, the cited documents frequently don't say what Copilot claims they do.
If GenAI search results or summaries can't be verified against the sources they cite, how can they ever be deemed trustworthy, or free of legal liability for fabricating falsehoods? Somehow this Achilles' heel must be eliminated, or they won't be usable in the many domains where validation is required (medical diagnosis, legal evidence, investment advice, etc.).
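For what it's worth, the spot-check being asked for here doesn't have to be exotic. Below is a deliberately crude sketch (the `support_score` heuristic, the 0.6 threshold, and the example claim are all my own hypotheticals, not anything a vendor ships): a real verifier would use an entailment model rather than token overlap, but even this catches the grossest "the source doesn't say that" failures.

```python
# Minimal sketch of automated citation checking, assuming the model
# hands back (claim, source_url) pairs. Token overlap is a weak proxy
# for "does the source support this claim?" -- a tripwire, not a validator.
import re
import requests

def support_score(claim: str, source_text: str) -> float:
    """Fraction of the claim's content words (4+ letters) that
    appear anywhere in the cited document's text."""
    words = set(re.findall(r"[a-z]{4,}", claim.lower()))
    if not words:
        return 0.0
    text = source_text.lower()
    return sum(w in text for w in words) / len(words)

def check_citation(claim: str, url: str, threshold: float = 0.6) -> bool:
    """Fetch the cited page and flag citations whose text doesn't
    plausibly back the claim. The 0.6 cutoff is arbitrary."""
    page = requests.get(url, timeout=10).text
    body = re.sub(r"<[^>]+>", " ", page)  # strip HTML tags crudely
    return support_score(claim, body) >= threshold

if __name__ == "__main__":
    claim = "Python was first released in 1991."
    url = "https://en.wikipedia.org/wiki/Python_(programming_language)"
    print(check_citation(claim, url))
```

Note that token overlap will happily pass a paraphrase that inverts the claim's meaning, which is exactly the failure mode the parent describes, so anything production-grade needs semantic entailment, not string matching.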
In fact, it's hard to imagine many uses for GenAI that won't _eventually_ require some calibrated measure of accuracy, and a way to validate it, before they can be widely adopted.
The only time I've seen extensive citing from an AI is when I use Deep Research on the ChatGPT Pro plan; otherwise, citations are few and far between. Perplexity was one of the early services that seemed to make an effort to get this right, but I haven't used it in quite some time.
I was going to say Perplexity usually seems good, but reading the article it was interesting to learn they're being naughty: using forbidden sources while pretending not to.
This is a feature enabling our content-derived species to enter the age of full-spectrum, story-driven history-fiction, as proposed by parkerblubtan at the third matzmum tecdot convention in 2017.