Is Facebook actually moderating in good faith though?

Consider that divisive, offensive, and false content is guaranteed to generate engagement and thus contribute to their bottom line, while content without these traits is less likely to do so. The incentives are misaligned from the start: their profits directly correlate with their negative impact on society.

Consider that Facebook hosts plenty of content that violates its community standards, doesn't even try to hide itself, and is thus trivially detectable with automation: https://krebsonsecurity.com/2019/04/a-year-later-cybercrime-...

Consider that Instagram doesn't remove accounts with openly racist & anti-Semitic usernames even when reported: https://old.reddit.com/r/facepalm/comments/kz10nw/i_mean_if_...
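To illustrate how trivial this kind of detection is, here's a minimal sketch of a username blocklist check. The terms and usernames are hypothetical placeholders, not Facebook's actual tooling:

```python
import re

# Hypothetical blocklist of slur/scam keywords (placeholders only).
BLOCKLIST = ["badterm1", "badterm2"]

def flag_username(username: str) -> bool:
    """Return True if the username contains any blocklisted term,
    ignoring case and punctuation used to evade naive matching."""
    normalized = re.sub(r"[^a-z0-9]", "", username.lower())
    return any(term in normalized for term in BLOCKLIST)

print(flag_username("bad.term1_official"))  # True: matches after normalization
print(flag_username("harmless_user"))       # False
```

A real system would feed flagged accounts into a review queue rather than auto-banning, but the point stands: openly offensive usernames don't require sophisticated AI to catch.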

Is Facebook truly moderating in good faith, or are they only moderating when the potential PR backlash from the bad content getting media attention is greater than the revenue from the engagement around said content? I strongly suspect the latter.

Keep in mind that moderating a public forum is mostly a solved problem; people have done so (often benevolently) for decades. The social media companies' pleas that moderation is impossible at scale are bullshit - it's only impossible because they're trying to have their cake and eat it too. When the incentives are aligned, moderation is a solved problem.
