"I don't like the content that ends up on + promoted through algorithms on Facebook, therefore, they should be responsible"
It's not their content, and they aren't the ones engaging with it.
They are pushing it because users post it and then engage with it.
Say we got rid of Facebook tomorrow. This exact same problem would happen at another company/platform unless there was heavy intervention/moderation (which, to my understanding, Facebook absolutely already does at some level?). What's that equivalent to? I'm not a "free speech advocate" or anything like that, but what you're asking for is censorship instead of asking users to bear the responsibility for the content they consume.
I wouldn’t consider only the content/feed selection algorithm - I would take a holistic look at how people interact with Facebook (like button only initially, then expanded to a few more choices; real identity, not many comments shown at a time), then also compare it to other sites and their outcomes like Reddit (up and down votes, anonymous if you wish, lots of comments shown at a time).
Regarding users bearing responsibility for what they consume - I have seen others report running into this and I have as well - no amount of feed grooming keeps the controversial content off my feed for a meaningful amount of time. I actively unfollow / snooze content I don’t want to see, but Facebook has countered this by adding “suggested for you” items to my feed. It’s impossible to only see my friends’ updates. I must be exposed to content selected by Facebook as a cost of using Facebook.
Most people use the word censorship to refer to banning some specific kind of content: banning what. Meanwhile, the discussion around Facebook is usually around how they measure and reward content, not what the content is.
If you're actually being sincere in this discussion, I think you may not see how engagement is just one particular kind of how and it's not a given that social media sites optimize for it. It was an innovation that some sites introduced because it benefited their business model and it caught on for a while. But it wasn't universally adopted, needn't be used at all, and is becoming less fashionable among both sites and users. Facebook's a bit behind the game on restructuring away from it though, possibly because of over-investment and momentum, or possibly because of being a closely-held company that can be philosophically stubborn.
FYI, "gets promoted" is the key phrase there, at least to the many people disagreeing with and downvoting you here.
Promoting that content is the "how" that Facebook has chosen and continues to use, and it isn't a given for operating a social media company. That's what they're being held responsible for.
Their content delivery/ranking algorithms prioritize (from my understanding as just a general member of the public who has never seen what's probably millions of lines of code across many different internal private systems at Meta) user engagement above all else.
If the algorithm is spitting out content that we're all critical of (because it's divisive, bad for mental health, etc.), why are we not acknowledging that it's just a side effect of
1. user input + 2. user engagement to that input = 3. ranked content delivery algorithm
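The feedback loop sketched above can be illustrated with a toy ranking function. This is a deliberately simplified sketch under my own assumptions (the field names and weights are invented, not anything from Meta's actual systems); the point is only that ranking purely by engagement signals surfaces whatever gets reacted to most, regardless of what the content is.

```python
# Toy sketch of the loop: (1) users post content, (2) users engage with it,
# (3) the feed is ranked by that engagement. All names and weights here are
# illustrative assumptions, not Facebook's actual system.

def rank_feed(posts):
    """Order posts purely by a crude engagement score."""
    def engagement(post):
        # Hypothetical weighting: comments and shares count for more
        # than passive clicks.
        return post["clicks"] + 2 * post["comments"] + 3 * post["shares"]
    return sorted(posts, key=engagement, reverse=True)

posts = [
    {"id": "calm_update",   "clicks": 50, "comments": 2,  "shares": 1},
    {"id": "divisive_rant", "clicks": 40, "comments": 30, "shares": 20},
]

ranked = rank_feed(posts)
# The divisive post ranks first despite fewer clicks, because comment and
# share activity dominates the score.
```

Nothing in `rank_feed` looks at the content itself; the "model" is just a mirror of what users interacted with, which is the point being argued here.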
If Facebook went away tomorrow, another platform would fill its place. Humans have shown that they like to doom-scroll on their phones and are OK being fed ads in the meantime, because social media scrolling is basically the equivalent of an electronic cigarette for the brain in terms of dopamine delivery.
Because humans have chosen to prioritize the shittier, meaner, grosser, nastier content, Facebook's "models" have been trained on that. Why are we criticizing Facebook's models when we each played our own individual role in building them?
> Why are we criticizing Facebook's models that we basically played our own individual role in building?
Practicality and precedent. We can effectively regulate the algorithms a countable number of companies use in their products, but can't effectively make people not susceptible to doom-scrolling (nicotine, heroin, gambling, whatever).
> If Facebook went away tomorrow, another platform would fill its place.
Not if the troubling practices were illegal or too expensive to operate. It would be a pretty dumb investment to bother. Instead, new platforms would pursue different techniques for content presentation and profitability, just like existing competitors already do.
> We can effectively regulate the algorithms a countable number of companies use in their products, but can't effectively make people not susceptible to doom-scrolling (nicotine, heroin, gambling, whatever).
What does regulation of Facebook's algorithm look like to you? Promote specifically which types of content less/not at all? Obviously "politically divisive/disinformation" but who gets to be in charge of that/classifying what is/isn't disinformation/divisive?
You keep missing that the problem is engagement optimization, not politically divisive content or disinformation per se.
A site that optimizes for content that people heavily interact with (engagement) is different from a site that prioritizes the relationship type between users (friends' original content > public figures' OC > shares > etc.) or positive response (upvotes), etc.
None of these many other systems rely on discerning the specific nature of content or making sure it’s of some certain arbitrary moral character. But they each lead the network as a whole to favor different kinds of content and so the experience for users ends up different. And it’s already known that social media sites can be successful using these other techniques.
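To make the contrast concrete, here is a minimal sketch of two ranking "hows" applied to the same posts. The weights and field names are hypothetical, chosen only to illustrate the argument: neither scorer inspects what the content says or judges its moral character; they simply reward different signals, and that alone changes which posts surface.

```python
# Two content-blind ranking strategies over the same feed (all weights
# and names are illustrative assumptions, not any real platform's code).

# Relationship-weighted ranking: friends' original posts beat public
# figures' posts, which beat shared content.
RELATIONSHIP_WEIGHT = {"friend_original": 3.0, "public_figure": 2.0, "share": 1.0}

def by_engagement(post):
    return post["interactions"]

def by_relationship(post):
    return RELATIONSHIP_WEIGHT[post["source"]]

posts = [
    {"id": "friend_photo",  "source": "friend_original", "interactions": 5},
    {"id": "viral_outrage", "source": "share",           "interactions": 500},
]

engagement_feed   = sorted(posts, key=by_engagement,   reverse=True)
relationship_feed = sorted(posts, key=by_relationship, reverse=True)
# Same posts, same users: the engagement feed leads with the viral share,
# the relationship feed leads with the friend's photo.
```

The design choice being debated in this thread is exactly which of these objective functions a platform picks, not any judgment about specific content.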
> A site that optimizes for content that people heavily interact with (engagement) is different than a site that prioritizes the relationship type between users
I think that's hit the nail on the head.
In a neutral environment, a user would be shown a feed of friends posts with ads mixed in for their demographic.
In an optimised for engagement environment, a user is shown posts that are known to increase engagement and quite often those posts are inflammatory.
This engagement-optimized environment is much more likely to expose people to extreme views.