Hacker News

I've been framing this whole thing as a universal property of human society and it seems to fit pretty well for me.

Outrage attracts attention in all group interactions. I can't think of a single large scale group forum where this isn't true. It's integral to an absurd degree in our news cycle. Howard Stern exploited this property in his rise to fame. It's a core element in state propaganda, well documented throughout human history.

I'm old enough to remember when the internet was a lot more free - when there generally wasn't some parent corporation imposing content censorship on what you put on your homepage, or what you said on IRC. All of the complaints regarding Facebook were true of internet communications back then too (on the "sex trafficking" issue, compare to Craigslist of yore!)

The big difference seems to be there's an entity we can point a finger at now. Communications on Facebook aren't worse than what was on the internet two decades ago. In fact, they're far, far cleaner and more controlled.

What I look to is whether Facebook is more objectionable than alternative forms of communication, and I can't find any reason to believe that this is the case. Is twitter better? Is reddit? Is usenet? No.

So why does Facebook draw such ire?

Are people calling for controls on Facebook also calling for controls on self-published websites? On open communication systems like IRC or email? Where is the coherent moral philosophy regarding internet speech?

To be honest, my biggest concern when I read the news surrounding this issue is that most of the internet might not be old enough to remember what it means to have a truly free platform, unencumbered by moralizing. Why are people begging for more controls?



I think a lot of folks forget that Facebook wanted to come in and clean up some of the filth in social media. They felt that by attaching your _real_ name to your posts, instead of a handle as was the traditional practice, you would have something to lose (social standing, esteem, etc) and so you would be more thoughtful about your actions. The contrasts at the time were reddit, SomethingAwful, and 4chan. There was _definitely_ extant toxicity on the internet, and there were funny posts in the early days of GMail claiming you could stop it from displaying ads by inserting lots of expletives and bad words in your email (and so some people had GMail signatures that just lumped bad words together and explained it as an ad circumvention thing).

But I think there are a few key innovations that make FB worse for human psychology than previous iterations. Chief among them is the algorithmic newsfeed designed to drive engagement. Outrage certainly provokes responses, but in a chronological feed situation, eventually threads would become so large that the original outrageous situation would be pushed far back and the outrage would go away. Algorithmic newsfeeds bubble these to the top and continue to show them as they get more comments/retweets/shares/etc. They reward engagement in a visceral way that offers perverse incentives.
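The contrast between the two feed types can be made concrete with a toy sketch. This is purely illustrative: the `Post` fields, the example posts, and the engagement scoring formula are all made up, not anything Facebook actually uses.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    age_hours: float   # hours since the post was made
    comments: int = 0
    shares: int = 0

def chronological(feed):
    # Newest first: an outrageous post sinks as fresh posts arrive.
    return sorted(feed, key=lambda p: p.age_hours)

def engagement_ranked(feed):
    # Hypothetical score: comments and shares push a post back up,
    # so high-engagement (often outrage-driven) posts keep resurfacing
    # long after a chronological feed would have buried them.
    return sorted(
        feed,
        key=lambda p: (p.comments + 2 * p.shares) / (1 + p.age_hours),
        reverse=True,
    )

feed = [
    Post("vacation photos", age_hours=2),
    Post("outrageous hot take", age_hours=30, comments=500, shares=120),
    Post("dinner recipe", age_hours=5, comments=3),
]

print(chronological(feed)[0].text)       # the newest post leads
print(engagement_ranked(feed)[0].text)   # the day-old outrage still leads
```

In the chronological ordering the two-hour-old post wins by recency; in the engagement ordering the thirty-hour-old outrage post stays on top because every new comment refreshes its score.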

Secondly is the filter bubble. By showing you content hyper-relevant to your search interests, you can easily fall into echo chambers of outrage and extremism. Internet communities, like IRC channels, had huge discoverability issues. Each community also usually had disparate ways to join them adding another layer of friction. Even if you were an extremist it took dedicated searching to find a community that would tolerate your extremism. Now mainstream platforms will lump you into filter bubbles with other people that are willing to engage and amplify your extremist posts.
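The feedback loop described above can also be sketched in a few lines. Again this is a toy model with invented topics and a trivial relevance score, not any real recommender: the point is only that ranking by past engagement makes each batch lean further toward what the user already clicked.

```python
from collections import Counter

def recommend(candidates, engagement_history, k=3):
    # Hypothetical relevance model: weight each candidate post by how
    # often the user has already engaged with its topic. Every click on
    # a topic tilts the next batch further toward that topic.
    interests = Counter(topic for topic, _ in engagement_history)
    ranked = sorted(candidates, key=lambda c: interests[c[0]], reverse=True)
    return ranked[:k]

history = [("politics", "post1"), ("politics", "post2"), ("cooking", "post3")]
candidates = [
    ("politics", "rant A"),
    ("politics", "rant B"),
    ("cooking", "recipe"),
    ("gardening", "tips"),
]

print(recommend(candidates, history))
```

With two prior politics clicks, politics posts crowd the top of the batch and the gardening post never surfaces at all; feed the resulting clicks back into `history` and the skew only deepens.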

Combine horribly narrow echo chambers with engagement-oriented metrics and you'll have a simple tool for radicalization. That way when you're thinking of committing a violent act because of the disenfranchisement you feel in your life and your community, you'll be funneled to communicate with others who feel similarly and enter a game of broad brinkmanship that can quickly drive a group to the extreme. Balkanization and radicalization.


> I think a lot of folks forget that Facebook wanted to come in and clean up some of the filth in social media. They felt that by attaching your _real_ name to your posts, instead of a handle as was the traditional practice, you would have something to lose (social standing, esteem, etc) and so you would be more thoughtful about your actions. The contrasts at the time were reddit, SomethingAwful, and 4chan. There was _definitely_ extant toxicity on the internet, and there were funny posts in the early days of GMail claiming you could stop it from displaying ads by inserting lots of expletives and bad words in your email (and so some people had GMail signatures that just lumped bad words together and explained it as an ad circumvention thing).

This is such a great point. The pre-Facebook Internet was full of anonymous random garbage. But everyone knew it was inconsequential garbage. Adding real names and likes changed all that: today garbage has gained legitimacy and is displacing prior forms thereof.


"the outrage would go away"

If there's one thing I've learned it's that the outrage never goes away. The type of people who fixate on outrage in their Facebook feeds are the same type of people who decades prior would cruise around town picking fights in person. I'm unconvinced that Facebook is meaningfully changing this dynamic.

I'm also unconvinced that the filter bubble is meaningfully different from what's come before. Humans have been sorting themselves into like-minded communities since before we could read and write. Do you remember the hive-minds of the 80s and 90s? If anything they were far more extreme because of the difficulty in proving anything, back before google and wikipedia. There was a lot more extremism and hate-based violence back then. A LOT, LOT more, and none of the interventions Facebook is at least attempting to provide.

Facebook has some new angles on old patterns in human behavior, yes. I think the people who're trying to show that it's made things worse have a lot of work to do to make a compelling case. Facebook's biggest transgression is probably that it has chronicled this behavior and dragged it into the light.


Very well put. When people say "it's always been like this" or "it's no different than X" – this is exactly the difference, and while fundamental human behaviors or impulses haven't changed, the design of the platform is changing how they are expressed.


We used to solve this problem by teaching people to have thicker skin, so that the outrage stayed controlled regardless of the forum in which it occurred.

However, for the last 10 years or so grievance culture has taken root and has not only excused outrage; its proponents have actively encouraged it.

It makes me think of that scene in Star Wars where Palpatine says "good, good. Let the hate flow through you", except we now have millions of people encouraging this.

How I wish we could rewind things to a world where forgiveness was still a virtue and we were all taught that sticks and stones may break our bones but words will never hurt us. Without such virtues, a world of outrage is inevitable.


I think this is an important point indeed. A piece of this puzzle, in my opinion, is that people are not taught this at home anymore. Most families have both parents working full time, and they're exhausted after work. Their kids are raised in daycare and neglected. And so many are raised in divorced/broken/separated/single-parent households, which compounds the problem much more.

Furthermore, most of the US isn't religious anymore. These values and maxims mentioned above are not taught to people anymore, at least not to the degree that they were in the past.

A piece of this should be better training in the home for kids on how to understand the internet: how to avoid being hateful, and how to question things. But so many kids are left to their own devices without parental oversight on this subject. I've even heard calls recently from parents for high schools and colleges to start teaching courses on how to avoid harmful content and misinformation online.

In what feels like ancient history, this used to be the parent's job, before both spouses were working full time.

Our kids and the younger generation suffer from lacking parental instruction on this.


> But so many kids are left to their own devices without parental oversight on this subject.

It's very "Lord of the Flies", isn't it?


> Communications on Facebook aren't worse than what was on the internet two decades ago.

Let's not underestimate the degree to which 'likes' (ersatz social affirmation) elicit the worst in people.


My point is that Facebook likes are simply a manifestation of a ubiquitous social characteristic.

We all get likes. Sometimes they're called upvotes. Sometimes they're called replies. Sometimes they're cumulatively seen as our status in the social pecking order.

Facebook doesn't add anything truly new or transformative here. These problems and patterns are ancient.


The patterns and problems are ancient, but convenience is a significant factor in terms of enablement and resulting harm. Humans and other animals have been vulnerable to addictive substances for as long as we can tell, but the effort needed to get high was much, much greater before we learned how to process and distribute addictive drugs cheaply and efficiently.


Usenet isn’t social media and doesn’t have a feedback/reward system.


Usenet certainly does have a feedback/reward system. All group social interaction does. Trolling for feedback/reward predates Facebook not just by decades, but by millennia.


No one is calling for the internet to be less free, or to have more constraints. They're calling for specific platforms to alter their interaction models to discourage toxic group behaviors at scale.



