I'm glad to hear that an experienced editor sees HN that way, and also that you see evidence of a repeatable approach. Those are good signs.
As I'm sure you know, it's a more complex problem than just being willing or unwilling to put up with bad behavior. There are costs to addressing it—quite a few, it turns out, no matter what approach one takes.
Our plan is to move HN toward more community self-regulation. Each step we've taken that way—e.g. when we added vouching for dead comments—has worked well. I doubt the community can become completely autonomous (though a mod can dream), but we're pretty sure it can go further in that direction, especially if we do a good job of specifying what kind of site HN is, and isn't, supposed to be.
I can often be an asshole on Reddit. But I never am here because HN is so clean and well kept. The signal to noise ratio is fantastic and I'll be damned if I'm going to be the one to lessen it.
Kind of amazing to me how it works that way: once there's graffiti on the wall, suddenly the perception is that it's okay to put graffiti on the wall.
IMO the biggest challenge of self-moderating communities is avoiding the echo chamber effect: ideas surviving and spreading due to their popularity, rather than their merit.
I'm curious if you agree that this is a problem, and what your plans are to address it, if you do.
I agree that it's a problem, one HN already has. I doubt we can solve it. We can maybe mitigate it, or at least not make it worse.
One tactic for not making it worse is to introduce changes slowly and with lots of time to assess the effect of each change. We're also willing to get procrustean, if we have to, about lopping off any changes that have bad effects. We've done that twice that I can think of.
Any new form of community moderation will come with a mechanism for overseeing it. For example, when we added vouching for users to rescue dead comments, we also added a review mechanism for moderators to catch vouching being abused—e.g. dead comments being rescued when they ought to stay dead because they violate the site guidelines. This combination, which was inspired by the flagging mechanism, turns out to work well. HN's userbase has vastly more capacity to sift through comments than moderators do, and makes good decisions most of the time. Meanwhile it's orders of magnitude less work for moderators to review vouches and flags than to read all the comments directly, and doing so lets us correct the community decisions that don't align with the site guidelines. Such an approach benefits from the strengths of both sides, users and moderators.
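The vouch-plus-review pattern described above can be sketched in a few lines. This is purely illustrative: the names, the vouch threshold, and the queue structure are assumptions, not HN's actual implementation.

```python
# Hypothetical sketch of "community rescues, moderators audit".
# VOUCH_THRESHOLD and all names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    dead: bool = False                      # killed by flags/software
    vouchers: set = field(default_factory=set)

VOUCH_THRESHOLD = 2                         # assumed; real value unknown

def vouch(comment, user_id, review_queue):
    """A user vouches for a dead comment; enough vouches revive it,
    and the event is queued so moderators can audit it later."""
    if not comment.dead:
        return
    comment.vouchers.add(user_id)
    if len(comment.vouchers) >= VOUCH_THRESHOLD:
        comment.dead = False
        review_queue.append(comment)        # mods skim this, not every comment

def moderator_review(review_queue, violates_guidelines):
    """Moderators inspect only the revived comments, re-killing any
    the community rescued in error."""
    for comment in review_queue:
        if violates_guidelines(comment):
            comment.dead = True
    review_queue.clear()
```

The asymmetry is the point: users generate a small stream of rescue events, and moderators review that stream instead of reading every comment.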
I joke that our goal is to turn HN into a pyramid scheme such that the community does all the work and we can just monitor things from time to time, but the real joke is that a 'pyramid scheme' in this case would amount to the holy grail of a self-regulating system. In my wildest dreams it would work à la the Tao Te Ching and simply run itself with the occasional tweak. But I don't expect to get off that easily.
The kind of community moderation we want has to do with protecting the values of the site—intellectual curiosity and civility—rather than particular content people post or views they hold. If you were to argue that those things can't completely be separated, I wouldn't disagree. But I do think we can influence it by how we design the software, and more importantly by how we clarify what the values of the site are. HN is getting to the point where we need a new round of specification about what the site is and what it is not. That's coming soon.
Indeed, one need look no further than Slashdot to see this in action. 100% community moderation produced a comment section that is all confirmation and no information.
That's a fundamental point and one we can do more to clarify. It's in everyone's interest not to discredit their own argument by going about it uncivilly.
I wonder how good or useful a civility algorithm might be in this case? Aren't sentiment analysis tools pretty good now? Finding examples of rude and uncivil comments on the Internet certainly won't be a problem.
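For a sense of what even a crude civility check looks like, here is a toy lexicon-plus-heuristics scorer. It is nowhere near a real sentiment-analysis model (those are trained classifiers); the marker list and weights are made up purely to show the shape of the idea.

```python
# Toy civility scorer. The word list and penalty weights are invented
# for illustration; a real system would use a trained classifier.

RUDE_MARKERS = {"idiot", "stupid", "shut up", "moron"}   # illustrative only

def civility_score(comment: str) -> float:
    """Return a score in [0, 1]; lower means likelier to be uncivil."""
    text = comment.lower()
    hits = sum(marker in text for marker in RUDE_MARKERS)
    shouting = sum(ch.isupper() for ch in comment) > len(comment) / 2
    penalty = 0.4 * hits + (0.3 if shouting else 0.0)
    return max(0.0, 1.0 - penalty)

def needs_mod_attention(comment: str, threshold: float = 0.7) -> bool:
    # Flag for human review rather than auto-removing: word lists produce
    # many false positives ("this bug is stupid" isn't a personal attack).
    return civility_score(comment) < threshold
```

Even a much better model would face the same design question: whether a low score should hide a comment automatically or just route it into a moderator's queue.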
How on earth do you propose to separate merit from popularity?
Besides, there's also the question of "for whom is this a good idea?" especially in the context of things like Uber and SF property values discussions. It matters which side of the proposition you're on.
To the extent that there are, that would really limit the scope of discussion. There's no objective morality and no objective politics. Even if we try to limit discussion to "facts", things fall apart fairly rapidly once we start talking stochastic events and statistical morality.
Rationality can tell us about likely means to achieve goals and give a probability distribution of outcomes. It can't tell you what to want, what is right, or what to risk.
>There's no objective morality and no objective politics.
Really? The majority of academic philosophers disagree with you, and they have very good arguments for moral realism.
One thing to think about: it's not possible to affirm objective standards of rationality and deny objective standards of morality at the same time without being inconsistent, since both epistemology and ethics deal with normative statements (what one ought to do), not fact statements.
The truth of moral realism is a meta-ethical fact that doesn't preclude debate and experimentation regarding what the Good actually is. Just like how the existence of an objective material reality is a fact assumed by science without precluding scientific debate and experimentation regarding the nature of that reality.
Asserting the existence of moral realism without proving a specific moral reality is acceptable.
The closest thing I can think of in computing theory would be that the statement "A perfect player playing Go will never lose when going first" is obviously accepted as true, despite the fact that creating a perfect Go player is an open problem.
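The reason such a statement is well-defined at all is backward induction: any finite two-player perfect-information game has a computable value under perfect play, even when actually computing it (as for Go) is utterly infeasible. A tiny single-heap Nim, chosen here just as an illustration, makes the idea concrete:

```python
# Backward-induction sketch: the value of a finite perfect-information
# game under perfect play is computable in principle. Single-heap Nim
# stands in for Go here because its game tree is tiny.

from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones: int) -> bool:
    """Nim, one heap: each turn take 1-3 stones; taking the last wins.
    Returns whether the player to move wins under perfect play."""
    if stones == 0:
        return False  # no stones left: the player to move has already lost
    # You win if some move leaves your opponent in a losing position.
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

Running this shows that multiples of 4 are losses for the player to move; the "perfect Go player" claim asserts the same kind of determinacy over a game tree no computer can traverse.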
I think you're really overestimating the relevance of moderation to that, compared to, for example, the education level, ages, and number of people, and their investment in the community.
Communities run in a tight feedback loop with their populations: when a community moves in any direction, some people who don't like it will leave (and some outsiders who observe it may refuse to join), which pushes the community further in that direction. The people in a community like HN are all self-selected, so the education level, ages, etc. depend strongly on the direction the community is going.
How are you measuring success? I feel that the quality of HN has declined a lot over the last year or two; we see a lot more one-liners and "couldn't resist" comments, and less technical content. (I think the caustic attitude of old HN helped; people who made jokes used to be shot down very quickly, whereas now people try to be nice about it). I now consciously avoid the site while America is awake.
> I now consciously avoid the site while America is awake.
Is this one of those troll comments to which we should respond by saying something nice? Because if it were an American making a similar comment in reverse, then it would be buried as jingoistic/xenaphobic/racist/etc.
I don't see how an American wanting to avoid comments from another continent would make them afraid of a warrior princess.
On a more serious note, I too have noticed a shift in the comments on this site; I really enjoyed the fact that HN was a more "serious" place for sober discussion, where most comments were useful (a larger signal-to-noise ratio). This culture made me reconsider my own comments on a few occasions.
EDIT: Not saying that this is a huge problem - just a development I don't enjoy.
Serious answer: Cultural differences DO exist. I come from a place where, confusingly, passive-aggressiveness is preferred over direct confrontation, AND sarcasm is used very seldom and only very obviously.
Both of these things put me on edge with Americans who, on average, despise passive-aggressiveness and use sarcasm as if it were the salt of conversation. The result is that I have to think in a completely different mindset to talk to US people, to avoid misunderstanding them or having them be disgusted by the way I talk.
I suspect, though I don't know, that the difference I perceive is more a factor of what kind of moderation happens in which timezones than a difference in regular users.
You guys do a great job with HN moderation, and I've always been curious what ratio of mods to comments is sustainable while still keeping quality high and the mods from burning out. Do you think HN is a good data point, or just special?
More mods doesn't necessarily mean more moderation, or the ability to moderate more content. From what I've seen it just means the moderation happens quicker and the mods pay less attention because "someone else will pick it up".
This does kind of make sense; barring some special moderation view that allocates posts directly to each mod, they're all viewing the forum the same way, with posts bumped to the top or sinking to the bottom, so things will get the same amount of attention or outrage either way. Speaking again from experience, it all depends on how long that fire burns before a mod sees it. A crappy comment deleted quickly causes little fuss; a crappy comment left to fester for a couple of hours and then deleted will still have the aftermath hanging over it. Even if the entire thread is purged, it still exists in the memories of users and affects the overall tone and behavioural standard of the forum.
In that case the answers are I don't know. We don't have many moderators, and I mostly think it would be a mistake to try to solve HN's problems by adding more official moderation.
It's hard and not always very gratifying work, which means you either have to pay someone to do it (as YC does with HN) or rely on volunteers who will extract other compensation from the system.
Then there are many psychological costs. I'll list three. First, the people whose comments are being moderated often don't feel that their comments are bad. They can become harsh, which can push your buttons. By the law of large numbers, any button you have will eventually get pushed, and when that happens it doesn't feel like a mere statistical process that's afflicting you. It feels more like an evil genius or demon that has perfect knowledge of how to drive you crazy. So you can literally feel driven crazy every day. (Edit: more on 'demons' here: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so....)
Second, an even harder problem is that people want explanations for why their comments are being moderated. It's hard to provide precise explanations in every case. When you get it right, it's helpful—at least to the general audience, and often to the commenter too—but it takes a ton of energy to get right because there is endless variation and nuance in these things, and the cost of being even a little wrong is high: it causes hurt feelings and a general sense of injustice. On the other hand, if you don't provide explanations, people feel that you're being arrogant and heavy-handed.
Third, every mistake you make will be jumped on by lots of people. Their intentions are largely benign, but it's hard to remember that when your mistake is being jumped on. I could continue!
There are different strategies you can take to this, but they all involve tradeoffs. When PG was moderating HN he relied on software to do as much as possible and did the human part with a minimum of (i.e. usually no) explanation. That led to complaints about lack of transparency and so on, but given that he was both building YC and raising small children at the time, there wasn't an alternative. By the time he handed HN over, the community had reached the point of needing a different approach. But no matter what approach one takes, there are these sorts of costs. I think the trick for keeping going is to design the system to have rewards as well as costs, but that's a different conversation.
(Edit from when I ran across this years later: from a 2020 perspective I would say that of the three psychological costs, #2 is the hardest because it consumes by far the most energy. #1 and #3 can be worked on internally, but #2 requires a lot of careful external communication, often under high-pressure conditions. I'm still thinking about what can alleviate that load. It's a bit of a double bind, because if you answer every objection and protest, it's draining—actually it's physically impossible—but if you don't answer, then people will imagine their own answer, invariably a hurtful one, and go away firmly believing it and feeling aggrieved, which is a bad outcome in its own right and often comes back to bite you later.)
Thank you for doing a great job, dang. When you're doing your job right, it becomes invisible. I, for one, tend to forget that the great community I enjoy here doesn't just happen all by itself.