One thing I think is lost in a lot of the comments here is that, to a large extent, privacy is experienced, not factual. That is, in many cases, the breach of privacy is the act of mentioning something that should be private, not whether the system (or the person) actually knows that thing. We intuitively understand this in our human relationships, but it somehow seems to be forgotten in the design of these systems (or, at least, in the conversations about them). We need good ways to tell the Google Assistant that something is private (or for it to figure that out for itself) -- even if it still possesses the underlying data.
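To make that distinction concrete, here's a minimal sketch of what I mean (all names here are hypothetical, not any real Assistant API): the privacy boundary is enforced at the point of mention, not at the point of storage. The system still possesses the flagged data; it just won't bring it up.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """A piece of data the assistant possesses about the user."""
    content: str
    private: bool = False  # user-set (or inferred) "don't mention this" flag

class Assistant:
    def __init__(self) -> None:
        # The assistant retains everything it has learned, private or not.
        self.facts: list[Fact] = []

    def learn(self, content: str) -> None:
        self.facts.append(Fact(content))

    def mark_private(self, keyword: str) -> None:
        # The user says "that's private": the data stays, only the flag flips.
        for fact in self.facts:
            if keyword.lower() in fact.content.lower():
                fact.private = True

    def mentionable(self) -> list[str]:
        # Privacy is enforced here, when deciding what to *say*,
        # not by deleting the underlying data.
        return [f.content for f in self.facts if not f.private]

assistant = Assistant()
assistant.learn("User searched for divorce lawyers")
assistant.learn("User's flight departs at 9am")
assistant.mark_private("divorce")
print(assistant.mentionable())  # only the flight is ever brought up
```

Deleting the data would also work, of course, but the point is that it isn't strictly necessary for the experienced sense of privacy: what matters to the user is that the topic never gets surfaced.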
(There are, of course, situations in which the actual existence or not of specific data is what matters, but I think those are less relevant to the success of something like Google Assistant than the perception of privacy -- and that perception is important, regardless of the underlying data.)