>> Since when do we have this right? [...] but when did we all collectively make that decision?
December 15th, 1791. As explained by SCOTUS as recently as 1995 (McIntyre v. Ohio Elections Commission), anonymous speech is baked into the First Amendment, putting it at the core of American freedoms:
"Anonymity is a shield from the tyranny of the majority. . . . It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society." (Cut from EFF.)
Agreed. Read any periodical from the 1600s onwards and you’ll see a lot of pen names. And people who think discourse was more civil back then (sort of assuming that people were more responsible when anonymous) are wrong: some of the most vile filth, slander, and gossip was published.
There was a particular instance where Thomas Jefferson called John Adams a hermaphrodite (via a supporter with a newspaper) [0]. Adams apparently responded by spreading a rumor that Jefferson was dead. I believe that technique was learned from Benjamin Franklin, who "predicted" the death of a rival almanac publisher. [1]
There's much more in the way of antics among the founders of the US, but this isn't unique to them. Classical antiquity even has stories of some of the childish stuff that politicians got up to.
EDIT to add example: IIRC Cicero accused Clodius of incest and a number of other things [2]. Clodius responded by accusing Cicero of acting as a tyrannical king while he was consul. Some of the insults don't hit the same today, but this is supposed to be seriously petty stuff.
The Cicero example wasn't anonymous, of course, but then again anonymity in classical antiquity would have been a good way to make sure your story didn't survive to the modern day. Having said that, there are stories about some of the stuff that was written in graffiti in Rome.
I'm not sure if I'd call any of this stuff "worse" or not, though there was more political violence in the ancient world. I guess it depends on what you include under the term "discourse".
For what it's worth, the origins of our word "invective" apparently come from Latin [3][4]. Apropos if true.
Pen names are known to the publisher, though. They are pseudonyms, not complete anonymity.
The current full anonymity of targeted, tailored communication allows for unprecedented levels of deceit and systematic, controlled disinformation of the electorate, leading to e.g. Brexit.
This systematic abuse of anonymity to deceive people is not comparable to the pseudonymous books, articles, and leaflets of the pre-Facebook era.
If people can't tell truth from lies, why even bother having elections?
You mention Brexit. If it was a foregone conclusion that people weren't sharp enough to vote either way, why not just ask Queen Elizabeth whether they should stay or leave?
> why not just ask Queen Elizabeth whether they should stay or leave?
There is an alternative, of course. The country could be mapped out as a few hundred distinct contiguous regions, and the people who live in each region could vote for one of their fellow inhabitants to represent them.
So, rather than each individual having to weigh the complex arguments for and against Brexit, they could make a much more "human" decision of deciding whether a candidate seems honest and competent. To aid them in that decision, they can look at the candidate's past record, and the potential endorsement they may have from a political party (which itself will have a record of good or bad policies).
Then the winning candidates (hopefully under some sensible voting system) can all get together and debate the issue, and commission reports from experts, then vote on the big question themselves. Of course there's still no guarantee that the right decision would be made, but, for the record, in the 2019 general election, 52% of the popular vote went to parties that were in favour of a second referendum (which could have had a better set of options on it).
The fallacy here is that districted voting ends up with two parties: extreme ones with FPTP, more centrist ones with, say, approval voting.
So first you want some proportional representation.
The second fallacy is that of the informed voter or the informed representative. The core issue we debated in this thread was about that second fallacy. The issue is that "information" is heavily manipulated in a world where anyone can anonymously claim anything. And how do you address that with representative democracy?
> So first you want some proportional representation.
Well, I did say "some sensible voting system", and I think such a system can still be district-based, either using some top-up seats (like in MMP) or (more controversially) assigning different weights to the votes of each representative based on what share their party got of the popular vote.[0]
> The second fallacy is that of ... the informed representative.
If someone has the full time job and the skills (as judged by the public) and the resources to seek out a broad range of opinions (from constituents, other representatives, academics, activists, and civil servants), then they are surely more able than the average citizen (or average dictator) to make a well-informed decision.
We're not expecting perfection, here, just an insulating layer between the manipulated information of the media (including social media) and the decision-making process.
Are they, now? If a politician needs $X,XXX,XXX in bribes to even get out of bed, does that make him better or worse at making decisions than the average man on the street?
I guess the question is, how many voters can you persuade with $X,XXX,XXX in political advertising? But, I suppose, if the politician is being bribed with the offer of having $X,XXX,XXX spent on advertising for their campaign, perhaps there's not much difference.
If disinformation is able to sway public opinion, and people are unable to distinguish it from the truth, which you clearly believe, then how do you know that your opinions are not also based on disinformation?
I'll certainly look further into the comment on pen names, because it is absolutely true: mobs and lynchings used to be far more common back then, and pen names were a safe way to avoid getting "on the wrong side of the mob".
It is probably exactly because this contrast (individual vs. furious mob) is less explicit today that anonymity itself is then seen as a problem.
Imo, the basic problem with how the internet has played out so far is this: what began as a sparse network, held together by mutual dependency on things like a base level of competence and intelligence, collegiality, respect for certain individual boundaries like pseudonymity, and other conventions, has, by becoming user friendly, become a battleground for people who are incapable of those necessary things. These are just Eternal September problems.
However, the necessary condition for all these kinds of total social surveillance controls is a network of authorities who have physical impunity from the people they ostensibly serve, govern, or rule, and I'd bet this decade will be defined by resolving the question of how far they are willing to go. We're already post-truth in public and political discourse, so there is no basis for mutual trust, and the real-world conventions about basic freedoms have been largely recast as privileges, as though they are both somehow granted by critics and subject to their political schemes. Some very clever people are still trying to find alternatives to a civil confrontation that seems both catastrophic and inevitable, and they're doing it in the form of growing new platforms, cryptocurrencies, privacy and security technologies, energy storage and efficiency techs, even vulnerability research as a hedge. But what I can only now call the leviathanists are co-ordinating to bar any and all exits for people, and in particular minds, that would generate any value they cannot subdue and spend themselves.
Indeed, meaningful anonymity is probably just about done online. Authorities believe that people misbehave when they are anonymous, but what they may be about to discover is that it was the anonymity that was the civility, and what is left without it is something altogether more serious.
Large-scale social media platforms are far from "user friendly"; they're advertiser-friendly and influence-peddler-friendly. Big difference. True user friendliness is now to be sought in federated platforms (where you can pick a 'server' that follows your preferred policies) and niche-focused spaces like HN itself, where standards of competence (however defined), collegiality and respect can still be meaningfully enforced.
Like every aspect of the world that humans interact with, the internet too is shaped collectively by all the people interacting with it.
Some sites that are heavily used have grown away from their original purpose toward simply generating revenue for shareholders. Google was once a handy search tool, not a data collecting/selling platform.
I rarely use Google anymore, and many other people are likewise fed up with services based upon shady business models.
Facebook is seeing a decline in its number of users... such exoduses have all the power that's needed to transform even the biggest companies.
The internet is full of thriving communities of people who turned their backs on shady business models.
I believe that this thriving portion of the internet will always exist.
Its size will simply depend upon how many people decide to stop feeding the hoarders their data.
I've noticed this as well. People fight like animals when their reputation is at stake. The way that the internet lures people into making public, attributable statements means they can't back down in any disagreement without serious loss of face.
> ... I don't think authorities understand is that they think people misbehave when they are anonymous. The trouble is, what they may be about to discover is that it was the anonymity that was the civility, and what is left without it is something altogether more serious.
Considering the scale of funds thrown at think tanks post-WWII to brainstorm precisely these types of developments, I'd question that assumption. What is being normalized is precisely the mechanism that will temper any such outcome.
> We could even end up living in a world where a kind of “e-passport”, crypto-signed government ID is attached to your every internet connection, and tracked everywhere online. […] The rise of bots could render many online communities simply uninhabitable.
> We are moving towards a model where the internet is dominated by a few centralized content providers and their walled gardens, and generated content may unfortunately make it even harder for grassroots online communities to survive and grow.
The author makes a good and fair analysis of the situation, yet I don't see how advocating for e-IDs is going to make the Internet a better place. This is a very ‘platform-focused’ critique that assumes the Internet tends toward more centralisation, where communities are packed together in massive platforms that regulate who communicates with whom and how.
Where do we see a need to regulate ‘fake news’ and artificially generated content? Facebook Pages? Twitter? Youtube? These are all hyper-global platforms focused on content monetization. On the other hand, local Facebook groups and the feed of your friends, Github repositories, Mastodon… may not face the same future. Maybe there's a lesson to learn here?
OP here. Surprised to see an old blog post resurface like this.
Just to be clear, I am not in favor of a centralized e-ID type of system; I think that would be a very ugly world to live in. This is why I talk about a web of trust at the end of the post. We need a better solution that doesn't rely on centralized control.
> On the other hand, local Facebook groups and the feed of your friends, Github repositories, Mastodon… may not face the same future. Maybe there's a lesson to learn here?
The way I see it, right now, bots are just not capable enough and it's not cost effective enough to try to gain access to these smaller communities. Facebook is a prime target because a single account gains you access to billions of users. That makes it worthwhile to farm out account creation to actual humans.
I do think smaller and more niche platforms will probably always be less subject to influence, but, there's money and power to be gained by influencing public opinion on any platform. So it's just a matter of time. Eventually there will be bots that are smart enough to create an account on any platform without human supervision and that are much less heavy handed and obvious. These bots will be able to auto-generate profile pictures and play more of a "long game" in terms of gaining your trust.
The wonder of the internet is that it has allowed many people, whose voices wouldn't have had any power, to find like-minded people.
Does it have its drawbacks? Fuck yeah, look at all the damned white power/neo-Nazis!
But giving even a minor voice a platform has allowed new cultures to develop. Hell, have a look at r/workreform, where the smallest of the small are able to talk about their issues... without being fired.
Yeah bots will always be there (now), but IMHO the situation for communicating is becoming better.
It's FUD. So long as applications in any form can connect to an internet, people can route around The Internet. Gopher as an example shows that alternative protocols can be created outside the modern approach. Mesh networking as well.
I find the sentence, "right to remain anonymous online," intriguing. Since when do we have this right? Where is this right written, and who decided we had it? I wouldn't be at all surprised if a poll of the HN community found that we all thought we had (or should have) this right, but when did we all collectively make that decision? Since some time around 2014 I've been advocating in private around the issue of automated inanity engines and their rise to prominence. IMO online anonymity does more harm than good right now, and we still haven't seen the full power of things like GPT-3 in the hands of nation-state actors (or maybe we have and just don't know it).
Totally agree with your sarcastic comment, but you shouldn't fall into the fallacy of condemning his ideas just because he doesn't follow through with them. Let's answer critically.
Do we have such a right? Some comments in this thread have answered it already.
Does anonymity do more harm? How did they measure that? What are the metrics and the data? Or what is the hunch leading them to this conclusion?
>Does anonymity do more harm? How did they measure that? What are the metrics and the data
Not everything has to be a metrics-based, data-collected thing, but if you insist, let's do a thought experiment.
I'm an individual with a radical hot take on something.
I want to exercise it in the marketplace of ideas... But then you've got that thing that led to Larry getting his face pulped by those guys last week...
Let's do a count.
Larry had a hot take last week:
Larry exercised it. +1
Beaten to a pulp. -1
It got out there, the marketplace was enriched; Larry's face wasn't.
As a person with a brain, and knowing I have my name attached to my hot take, I am not at all incentivized to contribute to the marketplace of ideas. -1
If it was a really disgusting hot take, then I just go on with it hidden, and it's never challenged nor the edges filed off. Incalculable loss.
+1-1-1=-1 in the short term.
Long term losses are incalculable.
Throw anonymity into the mix.
Anon X has this hot take. +1
Another Anon responds: "That's daft, and here's why..." +1
Another Anon pulls an ad hominem, adding nothing to the conversation. +0
Another Anon goes "Hey, I see your hot take, but what if..." +1
Up to +3 already.
Now you have a plurality of viewpoints coming together in a mixing pot: some good, some bad, lots of garbage, but some diamonds in the rough as well. And no one gets their face pulped!
And even the garbage gets challenged: you can't fight disinfo or horrid viewpoints without challenging them, and they'll never get challenged if no one is willing to air that crap. Every now and again, someone comes around from an absolutely terrible way of looking at things. All it takes is time.
All things being equal, take the route of least perverse incentive. Anonymity wins just about every time.
People underestimate the value of knowing the level of horrible people one is surrounded by. Seriously.
> Not everything has to be a metrics based, data collected thing
Very true, that's why my very next sentence prompted for a hunch, or some sort of reasoning.
Nevertheless, when data are absent, you need theories, and then you try to apply those theories to reality, see if they actually apply, fix wrong aspects of the theory, and reiterate.
Here is my theory:
Anonymity is vital in a society where different classes of people have conflicting vital interests, such as our society. It is a tool to allow you to criticise without being beaten for it.
Backing this up with a thought: would you beat up a guy who went out publicly and said "We should all plant red roses, red roses are the best!" when you fancy yellow roses? No, no one gives an F about that. People care when these opinions affect their lives and well-being, and crucially, they care when they have something to lose from what the other person proposes. That is the true incentive to retaliate. That's where the "conflicting interests" condition in the theory I proposed comes from.
Now, if we did not have conflicting interests, anonymity would still be nice to have, but not as important, because you wouldn't have people hunting for your head for lack of it.
If things are unregulated, then online anonymity comes naturally. On the other hand, it is anti-anonymity that is unnatural, because it needs to be enforced. So no one needed to decide that we have the right to anonymity, but "someone" is trying to decide that we don't have that right.
> if things are unregulated then online anonymity comes naturally.
I think that, unfortunately, this statement is a bit too strong. An unregulated world doesn't stop private actors from pseudonymously using resources and lying about themselves, of course, but it also doesn't stop owners of network fabric or service providers comparing log data to de-pseudonymize a user.
In the absence of regulation, the predominant factor is just what people want to do.
My point is that the option of anonymity comes naturally, as opposed to someone needing to grant you that right. Privacy is also a natural right, yet nothing stops anyone from putting an ear to your bedroom door, so to speak.
My guess is that the "right to remain anonymous online" is probably an overstatement of what most people believe, and the author was using a convenient, poorly-chosen phrase. I don't think he'd expect that you have the right to (for example) purchase goods online without providing personal information, or file your taxes online without giving your name, or register to vote in elections without verifying your identity: those are unlikely, if not absurd expectations. What most people mean when they say "remain anonymous" is something more like: if your identity is not strictly needed to provide some service, you should have the choice of whether to voluntarily provide it or not.
By this definition, "online anonymity" is something more like privacy: the right to act without unsanctioned intrusion. I know that anonymity and privacy are two different things, but I think they're often used to mean the same thing, and are in this example.
Anyway, to answer your explicit question, article 12 of the United Nations' list of basic human rights[1] lists privacy as a right, and they say that protecting people against unlawful and unnecessary surveillance is part of providing those human rights on the internet[2].
It is a fundamentally difficult question to answer, because who "we" are is hard to define on the internet. We are all part of different jurisdictions, with different laws. The U.N. is the best authority I can think of, though it's not got a lot of teeth.
A letter is anonymous. A phone call is anonymous. A quiet conversation can be anonymous. These are all established accepted means of communication. If we had the economic means to read every physical letter and overhear everyone all the time to prevent criminal activity, should we?
It's a right insofar as comprehensive identity data is additional metadata attached to the message. But all that is truly needed is the message itself; we don't even need a single field for username. The internet is anonymous by nature; it is only through our artifice that non-anonymous messages are even possible. This is what the incumbents experienced long before the normies took over, and IMO how it should remain.
I guess the same question could be asked in the opposite direction, so to speak - isn't a belief that such a right exists enough for it to exist, even if it's unwritten?
One of the singular greatest and most important aspects of online communities is the right to anonymity.
If every poster were required to have a government ID attached to their online presence, I worry that many of the people who need access to online communities the most would be unable to participate in them. I don't know what the solution would be, of course, but I'm tempted to pitch the idea of a "trustworthiness" score attached to users that functions somewhat like karma. Of course, currently there's no practical and certainly no bias-free way to evaluate the massive amount of information being posted daily. There's also no way to guarantee that the majority of people would trust such a system, nor that they are willing to verify any of the information they consume and readily spread.
Not to mention, how easy it would be for the government or its employees to destroy lives, by planting false evidence. If you have access to the system you would be able to destroy lives with just a few keystrokes.
That is certainly a chilling thought and definitely not an unlikely turn of events. Society seems to need some level of governance in order to function, but who can we trust to govern for the people and not turn to tyranny? Technology is such a beacon of light for advancement but also so dangerous in the wrong hands.
Platforms can do anything they want and most already do require real name verification. But platforms are only the tiny, centralized, commercial part of online. There are vast expanses of niche communities and individual's webservers out there that will never require anything like that.
I think that people have pretty good BS detectors built in. If online communities start to get a funny smell, they'll just double down on their small trusted in-person networks. We're already seeing this in smaller WhatsApp, Discord, Slack, and group-text communities. Anyone who thinks that a Twitter or Facebook thread is "real talk" is just fooling themselves. The promise of the internet to break down barriers has backfired, and the social walls have just gotten higher.
On all these sites, including this one, why is the engagement model to comment on things? At family gatherings, prior to COVID, I always noticed real life chit chat to be bizarrely opinionated and biased as soon as it veered from the weather or traffic. What is the value we are getting from commenting, that makes it “worth it” to us to give up anonymity?
Does the HN hive mind have pointers to some kind of trusted pseudonymity infrastructure?
Something along the lines of a network of pseudonym-issuing authorities, which allows two things:
Individuals can generate pseudonyms and use them to act online to their heart's content.
However, some kind of nuanced "reputation" is propagated back through the network and is "sticky" to the entity behind the pseudonyms.
As reputation is relative, the evaluation needs to be relative to the crowd you interact with. And the big big challenge is to protect the real life individual, so the pseudonym can only ever be uncloaked by the individual themselves.
Is that possible at all? Where are the communities researching this?
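A minimal sketch of the "sticky reputation behind disposable pseudonyms" idea (the derivation scheme, names, and scores here are my own illustration, not an existing system): derive each pseudonym from a master secret with a keyed hash, so outsiders can't link two pseudonyms, while the holder can later prove linkage, and claim the combined reputation, by revealing the secret.

```python
import hashlib
import hmac
import secrets

def derive_pseudonym(master: bytes, context: str) -> str:
    # HMAC keyed by the master secret: pseudonyms for different
    # communities cannot be linked without knowing the secret.
    return hmac.new(master, context.encode(), hashlib.sha256).hexdigest()

# Public reputation ledger, keyed by pseudonym only.
ledger: dict[str, int] = {}

master = secrets.token_bytes(32)  # known only to the individual
hn = derive_pseudonym(master, "news.ycombinator.com")
mast = derive_pseudonym(master, "mastodon.social")
assert hn != mast  # the two identities look unrelated to everyone else

# Reputation accrues independently per pseudonym.
ledger[hn] = ledger.get(hn, 0) + 5
ledger[mast] = ledger.get(mast, 0) + 2

def uncloak(master: bytes, contexts: list[str]) -> int:
    # Only the holder, by revealing `master`, can demonstrate that
    # several pseudonyms share one entity and sum their reputation.
    return sum(ledger.get(derive_pseudonym(master, c), 0) for c in contexts)

total = uncloak(master, ["news.ycombinator.com", "mastodon.social"])
```

Note that this toy version deliberately satisfies the "uncloaked only by the individual themselves" constraint: without `master`, linking `hn` and `mast` requires breaking HMAC-SHA256.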
Keybase seemed so incredibly promising to me, with some really cool features.
The company was bought by Zoom though, right? And the team was tasked with fixing the encryption of Zoom calls. Does anyone know what is expected to happen to the Keybase software and infrastructure?
After reading some of the all-bot https://www.reddit.com/r/SubSimulatorGPT2/ that OP links to... many of the comments right here in this HN thread are starting to look like bots to me too. Have people actually sic'd bots on this thread, or am I just over-triggering now?
Crap, now my own comment is starting to look like a bot to me.
This may be a testament to the very low bar set by internet forum/social media discussion; maybe we humans all write uninspired, generic word-salad comments all the time.
> crypto-signed government ID is attached to your every internet connection
No, we need crypto-signed anonymous identifiers that establish a reputation over time. The weakness of online reputation is the problem, not the bots or spam.
> "The rise of bots could render many online communities simply uninhabitable."
Is that uninhabitable in the sense "nobody goes there anymore, it's too crowded"? Why would people make bots to post online comment threads, if not to attract real people? And why would real people go there if it wasn't interesting or enjoyable or clickbait rage-engaging?
And why would Governments be interested in stopping you engaging with bots, anymore than they stop you engaging with fiction books or fiction TV or playing video games against computer characters, etc?
> And why would Governments be interested in stopping you engaging with bots, anymore than they stop you engaging with fiction books or fiction TV or playing video games against computer characters, etc?
Because they're afraid that foreign actors (or other bad actors) will try to systematically influence online discourse to manipulate public perception. That could mean affecting the result of elections or just fuelling the fire when it comes to political disagreements.
If they do come for online anonymity, they'll use the same old rhetoric that they're doing this to protect you, to protect the children, etc.
The author says fake content is going to be too hard to identify, so there is only one effective way to stop it, and that is to verify that everyone who posts content is in fact human. And that leads to the end of anonymity.
" Ultimately, one possibility is that online platforms will begin requiring a verified government ID in order to register. We could even end up living in a world where a kind of “e-passport”, crypto-signed government ID is attached to your every internet connection, and tracked everywhere online."
> The author says fake content is going to be too hard to identify so there is only one effective way to stop it
I strongly disagree with the author that there's only one effective way to stop misinformation.
Remember, the real problem isn't the existence or proliferation of misinformation, which is both incredibly subjective and impossible to prevent without a central source of truth which is fraught with ethical problems.
The problem is that many people uncritically believe anything they read, see, or hear.
You could solve the actual problem of misinformation tomorrow if there were concerted societal messaging that said something like the following: "Everything you read online, from any source, may be inaccurate. People with money and power try to manipulate you for even more money and power. Verify everything with multiple sources, consider all possibilities and people's motives, and use your brain."
This will never happen because powerful people want to have their cake and eat it too: politicians want you to believe all of the words that come out of their mouth, but not trust the naughty "conspiracy theories" about them being insider traders, for instance.
I think everybody already knows that you shouldn't trust anonymous, unsourced information you read online, and I doubt simply reminding them of this would do much good. It's just really hard to be constantly vigilant. Even on this site, I've caught myself casually reading an article or comment and taking the claims at face value, and then reading a response that actually does cite reputable sources showing it was complete bullshit.
> everybody already knows that you shouldn't trust anonymous, un-sourced information you read online
* Your comment makes no sense in light of the establishment freaking out about "misinformation" online. If everybody already knew what not to trust, then why are they freaking out about misinformation spreading wildly on social media?
* You yourself claim that you find yourself believing unsourced claims at face value, so there's a contradiction there too. I thought you already knew what you shouldn't trust.
* There is no definitive source of truth that exists. Even if something is sourced by a "verified source", it can be false information, if even just by lying by omission. CNN, Fox News, Snopes, all politicians, and Facebook's fact checkers get things wrong or tell falsehoods. Trusting an "official source" is not a prescription to avoiding misinformation.
The hypothetical presented is essentially a highly regulated walled garden. "Bad guys" would be blocked or cited. If all content is tied to an ID, yes it would work.
China has shown that for the most part you can regulate the majority of discourse in an online space if you wall it off to the bit you control. Not sure how this is different.
Not that I agree with it. Not a fan of the suggestion myself.
If it's too hard to identify fake content, how do you identify bad guys, even if you know their IDs? What difference does it make what ID is attached to information if you can't identify it as fake?
If you can identify it as fake, the whole premise evaporates.
I don't think the premise evaporates.
There's a solvable difference between checking an ID against a database of IDs you control, vs identifying the fakeness of any other piece of random information.
If the 'state' pushes for digital identification for access to networks, we have a golden opportunity to create a public key-exchange mechanism as a public utility, as a component of the digital ID. If followed up with robust ad-hoc distributed VPN services/platforms, there will be the possibility of creating impenetrable (modulo algorithms & execution) private virtual spaces.
> Maybe there is a way to build a new web, a new kind of social media using a hash graph to implement a decentralized web of trust, something that can allow content verification without forcing everyone to sacrifice their right to remain anonymous online.
Comparisons to OpenPGP are likely to make anyone cringe, but if you're willing to suspend disbelief about the usability issues, it seems like such an apt metaphor, where signing a key is vouching for the owner's humanity, not their identity. I can imagine websites integrating with my browser to attach a "humanity key" to the content I post, and my browser maintaining my collection of keys and my preferences for divulging them.
Granted, pseudonymity is not the same as anonymity. Maybe if the meaning (and lifespan?) behind these keys was constrained, then they could be sufficiently disposable to approximate anonymity.
This system is either fundamentally flawed or already implemented, but I don't know how to determine which. Can anyone here share writing along these lines?
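To make the "signing a key is vouching for humanity" idea concrete, here is a toy web-of-trust calculation (the names, the graph, and the halving-per-hop decay rule are illustrative assumptions, not any real protocol): each key records which other keys it has vouched for as human, and a key's humanity score, from your point of view, decays with its distance from a key you already trust.

```python
from collections import deque

# vouches[a] = set of keys that key `a` has signed as "human"
vouches = {
    "alice": {"bob", "carol"},
    "bob":   {"carol", "dave"},
    "carol": {"dave"},
    "dave":  set(),
}

def humanity_score(root: str, target: str, max_hops: int = 3) -> float:
    """Breadth-first search from a key you trust; weight halves per hop.

    BFS visits each key first along a shortest vouching path, so the
    first visit carries the highest possible weight.
    """
    seen = {root}
    frontier = deque([(root, 1.0)])
    best = 0.0
    while frontier:
        key, weight = frontier.popleft()
        if key == target:
            best = max(best, weight)
        if weight <= 0.5 ** max_hops:
            continue  # too far away to confer meaningful trust
        for signed in vouches.get(key, ()):
            if signed not in seen:
                seen.add(signed)
                frontier.append((signed, weight / 2))
    return best
```

From alice's perspective, carol (one hop away) scores 0.5 and dave (two hops) scores 0.25; a key with no vouching path scores 0. A real system would of course use actual signatures rather than a plain dict, but the trust-propagation shape would be similar.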
"Proof of humanity" is an active area of research, and there are several attempts at it out there. One that's worth looking at is BrightID[0] (although some people might be put off by it being blockchain-adjacent, in that it uses a DAO as part of its consensus-building system).
As for the difference between pseudonymity and anonymity, it's worth noting that once everyone has a cryptographic identity, we can start layering on clever things like zero-knowledge proofs, for example, which would allow people to issue themselves new pseudonyms that carry the same level of trust as their core identity, without ever revealing what that core identity is (or which other accounts vouched for the core identity to give it that level of trust).
The premise is clearly correct; the implementation could certainly be more nuanced, however.
Rather than a blanket ban on anonymous accounts, how about read-only until you verify ID? Or perhaps some way of tar-pitting posts based on the degree of authentication of the account?
This way you can retain anonymity on sign-up, but incentivise celebrity by increasing reach.
OP here. I edited the post to make it crystal clear that I'm advocating against centralized e-IDs. The post is meant to read as a warning, not an encouragement to go in this direction. If you read this and you're feeling repulsed at the idea of a centralized license attached to your internet connection, good. You should be.
I didn't come up with the idea of centralized e-IDs; it was already out there. It's a terrible idea, but it's exactly the kind of solution governments are likely to go for.