
Well, I'm going to be somewhat controversial here and say: What the fuck were people expecting?

Moderation is hard, genuinely hard. Every order of magnitude increase in community size does not increase the moderation requirement linearly: it grows factorially.

Why? Because every single communication has the potential for abuse, and the number of interactions on a platform does not scale linearly with the number of users.

This is why things like the "Eternal September" exist: a deluge of new users is basically impossible to moderate at scale.

I think Twitter, Facebook and co. have done a fairly decent job with the mess they made, but crucially they decided that a walled garden where everyone exists together was their business model.

I think this is fundamentally flawed. "Back in my day" (I know it may be glazed with nostalgia, but) smaller, close-knit forums were much better at moderating their communities, because it was still humanly possible.

Some communities did become tyrannical, but the benefit of small communities is that people simply go wherever it's "nice enough", and if you don't like the moderation staff or how they moderate, you can move on with your friends.

I think people don't want this to be true because they are so financially invested in the centralised model; but ultimately you force a single set of potentially tyrannical moderators and a single culture on everyone, and people aren't willing (or able) to pay for the correct level of moderation.

It's Sisyphean and totally self-inflicted.




I was thinking the other day about forums and how much better they were. One reason, the main reason, was that they learned lessons from Usenet and stopped any flame wars in their tracks. These social media networks didn't, and have let our entire society devolve into a giant flame war. We need systems to force people out of heated conversations and ensure they are engaging in good faith. This requires human moderation but we could surely build tooling to detect if a conversation is heated and force people to take a breather.


> This requires human moderation but we could surely build tooling to detect if a conversation is heated and force people to take a breather.

But who gets to decide what a heated conversation looks like and who should take a breather? This is the fundamental problem that we can't seem to agree on as a society.


We do agree, though, that almost all political discussions are flame wars; we just think the other guy is the problem. I think just adding cool-down periods when two people are rapidly responding to each other would go a long way, as well as giving users a timeout from one another if this happens often.
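
Something like this sketch is what I have in mind. It's purely illustrative: the thresholds, names, and data structures are all made up, not anything a real platform does.

    from datetime import datetime, timedelta

    # Hypothetical sketch: put a pair of users into a forced "breather"
    # when they reply to each other too rapidly.
    RAPID_WINDOW = timedelta(minutes=10)
    RAPID_REPLY_LIMIT = 5              # replies allowed within the window
    COOLDOWN = timedelta(hours=1)

    reply_log = {}   # (user_a, user_b) -> recent reply timestamps
    cooldowns = {}   # (user_a, user_b) -> cooldown expiry time

    def record_reply(author, target, now=None):
        """Return True if the reply is allowed, False if the pair is cooling down."""
        now = now or datetime.now()
        pair = tuple(sorted((author, target)))

        expiry = cooldowns.get(pair)
        if expiry and now < expiry:
            return False   # still in the forced breather

        recent = [t for t in reply_log.get(pair, []) if now - t < RAPID_WINDOW]
        recent.append(now)
        reply_log[pair] = recent

        if len(recent) > RAPID_REPLY_LIMIT:
            cooldowns[pair] = now + COOLDOWN   # both users get a timeout from one another
            return False
        return True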


My personal opinion is that not being able to see the other person's face when you're communicating with them is a big part of the problem. Perhaps a cooldown period would help and I'm all for trying to find ways to improve discourse online, but at the same time, the psychology of being able to say whatever hateful thing you want because the other person is hundreds if not thousands of miles away drives a lot of this behavior. This is true of forums, gaming chats, etc.


> This is why things like the "Eternal September" exist, a deluge of new users is basically impossible to moderate at scale.

Great reference. I used and liked Usenet a lot in the early 90s. I lurked in the comp.* and sci.* groups among several others. Sure, I knew there were nasty, crazy things on alt., and as a 13-year-old I looked at some porn in there.

Usenet was not "centrally" moderated, and it was fine. There was spam, sure, but with a suitable client with a spam filter, things were good.

In my view, moderation has to happen at the edge, not in the center. People should be able to post whatever legal stuff they want in those types of services, in the same way anyone can go to a public park and shout/speak whatever crazy things they want. Now, if you start to pee in public (illegal) or post something illegal, then the police should investigate and get you for committing an illegal act, but there's no reason why there should be censorship of everyone for the possibility of someone committing a crime.


> every order of magnitude increase in community size is not linear to the moderation requirement: it is factorial.

> Why? Because every single communication has the potential for abuse, and the number of interactions on a platform do not scale linearly with the increase of users.

I'm not sure this is correct. It sounds like your underlying model is "number of users N, number of potential interactions N x N." But people have finite time and resources. Every user can only post a maximum of T times a day, where T is some constant. So I think the number of actual interactions is linear in N.
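
A rough back-of-envelope comparison of the two models (the per-user posting cap is an assumption I'm making up purely for illustration):

    # Illustrative only: contrast the count of *potential* pairwise
    # interactions with actual interactions capped by users' time.
    def potential_pairs(n_users):
        # every user could in principle interact with every other user
        return n_users * (n_users - 1) // 2

    def actual_interactions(n_users, posts_per_user_per_day=50):
        # each user only has time for a bounded number of posts, so this is linear
        return n_users * posts_per_user_per_day

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} users: {potential_pairs(n):>13,} potential pairs, "
              f"{actual_interactions(n):>9,} posts/day")
    # potential pairs grow roughly quadratically; posts per day grow linearly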


You’re thinking of 1:1 communications, I would guess.

In reality twitter is 1:n relationships.

Content that is interacted with may lead to new interactions from unrelated people. So it’s really n:n.

A person's posting output, in any event, easily outpaces anyone's ability to moderate it. It's very easy to spew content, and it requires much more effort to analyse and weigh it.

I sincerely believe it's not in step with the growth of users; instead it is exponential.


> Well, I'm going to be somewhat controversial here and say: What the fuck were people expecting?

Every time some idiotic thread shoots up with 1500 comments, I just assume it can all be summed up with a line like this.


It's possible. As a thought exercise, I'm sure Amazon has figured out some sort of formula to prevent the sale of illegal items on its marketplace, which it has to operate at scale. At the very minimum a structure like that could be put in place. Another approach is the community of moderators that Reddit uses.

So many Silicon Valley companies launch products designed to scale without any regard for social impact. It's time to move beyond that myopic point of view. It's not someone else's problem.


The incentives for bad actors are different too. Imagine the difficulty of crafting a bot ring that can reach into thousands of forums to affect millions of users. It never happened because the problem was too complex. Meanwhile, throw all those millions of users into one basket and suddenly you only need to develop a bot ring for one network, and the incentives are sky high because of how many people you can reach with that one bot ring.


If moderators are good, why can't you just hire more moderators?


Money. Moderators are expensive, and if you want moderation to be any good you need people from the culture you're moderating who understand the context of what is and isn't abusive in a given language. It's also an absolutely terrible job, because you're just sifting through the absolute worst of the content on the platform all day.


Reddit has free moderators


Generally either the users or the admins are unhappy with the moderators of any given subreddit. They're one possible way around it, but not a particularly good one, and it's less likely to scale on Twitter, because there's no equivalent to subreddits to give volunteer mods control over, so you can't engender the same "ownership of a community" feeling.


And reddit has not solved the problem.

To many, Reddit is just as unpleasant as Twitter, and in some cases worse.


Moderators on forums had to handle maybe 100 messages per day. They knew the context of each discussion thread and could make pretty well-considered and nuanced moderating decisions.

Twitter receives something like 500 million tweets per day. So if you had a million paid moderators, maybe they would be able to keep up with the sheer volume. And then you'd still get people arguing either side, too much or too little moderation/censorship. Corporate bias would be attributed, rightly or wrongly, just as it is now.
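
A rough back-of-envelope on those numbers (the 8-hour shift is my assumption, purely for illustration):

    tweets_per_day = 500_000_000    # figure cited above
    moderators = 1_000_000          # hypothetical paid moderation staff
    shift_seconds = 8 * 60 * 60     # assume one 8-hour shift per moderator

    tweets_per_moderator = tweets_per_day / moderators        # 500 tweets each per day
    seconds_per_tweet = shift_seconds / tweets_per_moderator  # ~57.6 seconds per tweet
    print(tweets_per_moderator, round(seconds_per_tweet, 1))

Even with a million moderators, that leaves under a minute to read, understand the context of, and rule on each tweet.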


Because moderators need to be competent, understanding and honest. They will not come cheap and these apps have billions of users in some cases.


As the GP stated repeatedly, it doesn't scale. The number of interactions scales with the factorial of users. The moderation team itself also doesn't scale: good moderation is very difficult and requires trust. More moderators spread the trust thinner and greatly increase the chance that you end up with one or more bad moderators, who in turn damage that trust.


> Number of interactions scales with the factorial of users.

That is just not true. That's a count of possible relationships. There is a limit to how many interactions a human will perform in a day, and it's not related to how many other users there are on the platform.


The motivations are different, but I think Wikipedia is an interesting example where editing/moderation does (mostly) work. Again, I can't see how that could ever work on Twitter, but Wikipedia is the only large-user-base example of "social media" that isn't horrible.


I think that is because every contributor is responsible for the whole, and thus they are all moderators too, and all responsible for any content transgressions.

If someone posts a hateful tirade on Twitter, it's no one else's responsibility but Twitter's, really. It's their account, and Twitter's platform.

Of course, if you gave Twitter users the ability to self-moderate, it would be an absolute mess.


If one woman can make a baby in 9 months, why can't 9 women do it in one?


Moderators are not identical. One moderator's ban is another moderator's timeout.

C'mon. You know this.


The combinatorial explosion mentioned in the post above: it would be bad enough if the requirement for moderators expanded linearly with users, but it actually expands exponentially, more like with the number of interactions.

Plus, Musk tweeted that he'd eliminate bots or die trying. If he really means this, it would be a truly great contribution to Twitter: free speech is one thing, but amplified disinformation is another. But this apparently requires levels of effort beyond anything social media manages today, especially when it is not just automated bots but also paid troll farms grinding out disinformation and deliberately undermining communications.


Facebook does this, no? I thought the issue was that you effectively have to be a psychopath to do the job because of how horrific the unmoderated internet gets.



