> The active meddling thing, when observed in action, is a source of information.
Not if it's done right. If one person views a page the old-fashioned way, caches the DOM, and circulates it peer-to-peer, then whoever is weaponizing that content only has one browser fingerprint to work with, despite there being potentially thousands of users that they wish they could profile.
That's far less information to work with than the thousands of individual requests they would otherwise have to scrutinize.
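The fetch-once-and-redistribute idea above can be sketched in a few lines. This is a hypothetical illustration, not a real protocol: `fetch_once` stands in for the single real browser request, and a content hash lets peers verify a circulated copy without ever contacting the origin server themselves.

```python
import hashlib

def fetch_once(url: str) -> bytes:
    # Stand-in for the one real request the volunteer makes; in practice
    # this is the only browser fingerprint the origin server ever sees.
    return b"<html><body>cached page for " + url.encode() + b"</body></html>"

def digest(content: bytes) -> str:
    # A content hash lets peers verify a copy without touching the origin.
    return hashlib.sha256(content).hexdigest()

# The volunteer fetches the page and announces its hash alongside it.
page = fetch_once("https://example.org/article")
announced = digest(page)

def verify_copy(copy: bytes, expected: str) -> bool:
    # Any peer can check a received copy against the announced hash;
    # thousands of peers generate zero additional requests to the origin.
    return digest(copy) == expected

assert verify_copy(page, announced)
```

The point of the hash check is that trust shifts from the origin server to the content itself: a tampered copy fails verification, while a faithful one passes no matter who relayed it.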
The honest/dishonest distinction only comes down to whether you're going to try to protect the volunteer who grabbed the page to begin with, or whether you're going to expose them to retribution.
As for the systems you can't lie to, those you can replace with more trustworthy alternatives. This is a lot of work but it's better than suggesting that your peers be honest in ways that will harm them.
So to answer your question, no. None of the scenarios where you let your adversary know that you're working against them, and also let them know how to find and harm you, are interesting strategies for combating surveillance.
Surveillance exists in support of targeted coercion. We should not make a target of the more honest among us. We need to protect those people most of all.
You need to imagine a surveillance system that you cannot lie to, and cannot avoid or replace. It will be there, with no way of escaping it. Satellites, network monitoring, it doesn't matter. Assume it exists.
Anyone in control of such hypothetical systems can act upon the surveillance information to manipulate a target (not only observe it). This could be done in several ways. LLM bots encouraging you to volunteer information, gaslighting, etc.
The load-bearing component of such surveillance systems is _not_ these actors (LLMs, bots, etc). It's _the need for surveillance_.
What encourages a society to produce surveillance in the first place? Catching bad guys, protecting people, etc. I'm not saying that I agree with it; it's just the way it works.
Anyone doing shady things is a reason for surveillance to exist. Lying is one of those things; making LLM bots is one of those things. Therefore, to target the load-bearing aspect of surveillance, I need to walk in a straight line (I won't deploy LLM bots, create alt accounts, etc). There should be no reason to surveil me, unless whoever is in control is some kind of dictator or has developed some kind of fixation on me (then it's their problem, not mine).
I can do simple things, like watching videos I don't particularly like, or posting nonsense creative stories on a blog, or just simple things designed to hide nothing (they're playful, with no intent). Why does someone care about what I post on a blog that no one visits? Why does someone care about the videos I watch? If someone starts to act on those things, it is because I'm being surveilled. They're honeypots for surveillance; there's nothing behind them.
With those, I can more easily see whoever is trying to act upon my public information, by marking it. They will believe they're targeting my worldviews or preferences or personality, but those things are actually "marked with high-visibility paint". In fact, I leave explicit notes about it, like "do not interact with this stuff". Only automated surveillance systems (unable to reason) or fanatic stalkers (unable to reason) would ignore such clear warnings.
This strategy is, like I mentioned, mostly based on honesty. It targets the load-bearing aspect of surveillance (the need for it) by making it useless and unnecessary (why are you surveilling me? I can see how you are acting upon it).
It's not about making honest people targets, it's about making surveillance useless.
I suppose we are. I generally assume that someone, somewhere, has something to hide: something that benefits me if they're allowed to keep it hidden. History is full of these characters, they keep the establishment in check, prevent it from treating the rest of us too badly.
If the powers that be could know with certainty that all of us planned to behave and would never move against them (or could neutralize those who had been honest about their intent to do so), then I think things would be much worse for the common folk than they are now. It's hard not to see your strategy as a path to that outcome.