Bad actors exist, which is why every new technology brings a new responsibility to use it properly, from nuclear fission to robotics to neural networks. That doesn't mean we need to invoke the Nazis every time a new technology is developed.
"What would the Nazis do with this?" is not wholly inappropriate. It's basically a way of looking at the worst case failure mode, like asking "what would happen to this nuclear reactor if all cooling and backup systems fail at once?" or "what happens to people in the capsule if the rocket explodes during max-Q?"
In this case it's "what's the danger of all this lazily deployed, insecure, ubiquitous surveillance gear in a political worst-case scenario, like a descent into totalitarianism or mafia statism?" That's not an unlikely scenario. Complex societies undergo bouts of collective insanity, or descents into pervasive corruption, with disturbing regularity on historical time scales.
Personally I think the USA is one 9/11-scale (or worse) terrorist attack, or one seriously painful economic crash, away from an American Putin or Chavez (or worse). Which we get depends on which side manages to field the most charismatic demagogue. If that happens, all this total-surveillance gear will be mobilized against dissenters on an industrial scale, and with a significant amount of public support.
You constrain things like surveillance to limit moral hazard. Future generations are likely to look back on the wanton deployment of all this stuff and ask, "What were they thinking?!"
>You limit things like surveillance to limit moral hazard.
I'm not quite sure how you got that out of "with every new technology comes new responsibility". That statement neither singles out surveillance nor limits the concern to moral hazard.