I've never used Facebook and never saw the point of it, but it is increasingly having an effect on me in the physical world.
As far as I can tell, it's a bifurcating, hate-fermenting disinformation platform that impacts elections, and its sex-bot CEO seems completely inept at even recognizing the impact it is having on democracy.
Instead of a well-informed citizenry, we've got an ill-informed one.
The CEO is well aware of the impact his firm is having on society. His focus on propagating addictive misinformation is strategic: they make money selling ads to eyeballs.
If they have more eyeballs clicking around to see more misinformation, they make more money.
There is nothing surprising about a guy who put “I’m CEO B1tch” on his business card caring less about destabilizing democratic governments than about making himself rich.
He’s been spending time learning Mandarin. Probably a good investment of time if people try to tar and feather him, Boston Tea Party style, should he succeed in destroying the American republic for profit.
>There is nothing surprising that a guy who put “I’m CEO B1tch”...
I've commented on this before around HN, but worth repeating: people do change over ~20 years.
If I were judged by the shit I did in high school or in my 20s, and people assumed I was still the same person, I would be fucked. (I'm guessing many on these threads would empathize.) It just reeks of pseudo-cancel-culture BS where one mistake follows you for your entire life.
People generally tend to change when they experience the negative consequences of their actions, not just willy-nilly.
He hasn't just made one mistake; he keeps making the same mistakes. Only a few years ago he ridiculed the notion that Facebook could have an influence on elections. Not only did that statement show that he hadn't learned, it also showed he obviously thinks we're all idiots.
> People generally tend to change when they experience the negative consequences of their actions, not just willy-nilly.
I think you're discounting the effect of lived experience and the natural maturity that comes with age. People and their perspectives do change on their own, not just because they made a mistake that impacted them negatively.
What you said makes perfect sense but I still refuse to use a platform whose founder called people “dumb fucks” for trusting him and using the platform.
Based on Mark's actions and announcements over the years, from Internet.org to his "The next decade" speech where he said they're building a network for all local businesses to "better connect with their communities," it's clear that Mark's One True Goal is that all digital information flow through Facebook, so that it can be analyzed, collated, and advertised against.
Mark wants every part of human society, from birthday wishes to buying a car to finding a local barber to political debates, to operate through Facebook's servers.
The real question is why - is it just about the money for him? Does Mark dream of being a trillionaire before Bezos? Or is it about control? Does Mark want to be The Guy Who Knows Everything?
How is Facebook different from Twitter or Reddit? I personally think Reddit is extremely dangerous because on the surface many subreddits appear lighthearted, but are actually gateways to extreme views.
/r/ActualPublicFreakouts promises some snarky laughs but pushes the viewpoint that black people are inherently violent. And that content is heavily promoted by Reddit via the popular-posts view.
Facebook has studied extremism on its own platform, and its findings disagree:
> The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
I really don't know how you solve this problem, because finding people like you is about the #2 reason you even join a social network. Same with IRL social networks. I can't even imagine how much worse my life would be without the online queer community, which only exists at all because these kinds of algorithms connect people who would never find each other IRL because of fear and physical separation.
Plus, this is a very narrow interpretation of the problem. The public-discourse issues come from friends, family, and friends-of-friends sharing and spreading articles from the news, obviously fake rage-bait memes, and generally just having an audience for their opinions.
Unfortunately, what is the alternative? Does every social network or forum, after X users, need to consider getting a disinformation center?
The only other way to scale this is to make it illegal to spread disinformation; that way, citizens can potentially police one another by suing one another.
It happens in France for specific things, for example if you deny that the Holocaust happened.
I don't like your solution and it definitely wouldn't fly in the US but your statement of the problem is correct. This isn't a "Facebook", "Twitter", or "Reddit" problem -- they just happen to be the companies with enough users for the problem to surface. I'm not really sure what people expected to happen in a digital pseudo-public square.
I know it wouldn't fly in the US, as seen by the downvotes I'm receiving, but people also say a lot of other things we see in Europe wouldn't fly in the US, and yet we see more and more discussion of these things here (drug legalization, free education, universal healthcare, gun bans, no death penalty, etc.).
Well then what constitutes misinformation? Is speculation misinformation? Are opinions misinformation? How does misinformation differ from someone exercising their right to free speech, but miswording what they say?
I think that is a slippery slope. In fact, you just described communist Russia, where the government would get citizens to turn each other in for saying the wrong thing (this can easily be used for nefarious purposes). You are now allowing the government to suppress less popular opinions by labeling them misinformation.
I see this slippery-slope argument used a lot, but it pretty much applies to every single law. Every law has to make a decision on where to draw the line, and for free speech the US has decided for a very long time not to draw the line at all (and thus we end up with a lot of misinformation, hate speech, smear campaigns, internet bullies, etc.).
I have a feeling this is intentional. Facebook benefits from the current administration and likely wants it to continue; see their somewhat successful lobbying efforts to get their competition banned.[0]
FB probably has the ability to sway US elections to a small but measurable degree, and I bet they intend to use that ability for their own gain. Oversight (even their own internal version) could put a damper on that, hence the delay.
How does the oversight board work? Does it monitor shenanigans in all elections worldwide? Does it care if the “good guys” and “good gals” are the ones engaging against less sympathetic adversaries? What’s the plan?
1984 was a warning, not an instruction manual. Want to really make the world a better place? Improve yourself and stop worrying about how wrong everyone else is.
Then again, that takes actual work and even scarier - self reflection! Oh noes! We might have to gaze into our own internal abyss?
Facebook feels more and more like an illegal substance that alters one's mood or behavior. I hope Congress and the President are able to grow a spine and relentlessly go after FB, perhaps through regulation or something else.
Maybe I'm naively optimistic, but at least the scientific community has had centuries of amazing progress in approaching truth and steering away from misinformation. I'm not sure one could provide a similar example for the war on drugs.
Yes, because screeching “the science is settled” is how we advance science :p Scientific progress hasn’t always been simple or blessed by societal norms. Heck just go look up the history around plate tectonics - and that is fairly recent!
Sunlight is always the best disinfectant. Censorship never works. Indeed, it gives BS the mystical aura of “taboo” - “well if they are trying to suppress it, it must be for a reason”.
Personally if there are people peddling actual whackadoodle stuff I’d rather it be in the open so I can act appropriately.
And I think that’s the core problem - everyone is concerned about everything but “I”. For some reason our society seems to be pivoting from focusing on individual responsibility to wanting to police everyone but themselves. This is the path to destruction/tyranny.
The same people who (rightly) derided the moral majority busybodies from the 80’s now think they have the moral majority to do the same. It never ceases to amaze me.
I agree that censorship is likely not the way to achieve this, but the current social media model propagates the most absurd garbage exponentially while science / reality is left behind because of "low engagement". I feel there must be some way to address this without censorship.
Imagine, as a nation, being so deranged that you think your largest websites and corporations should be engaged in censoring speech that, in ordinary life, is protected by the first amendment, in order to support a specific political candidate.
Orwell and Huxley could not have dreamt up something as dystopian.
People want Facebook to censor vaccine misinformation. That has nothing to do with "a specific candidate". The issue is misinformation, which disproportionately benefits one side.
Also, just wait until you hear about movie ratings, video game ratings, censorship of profanity on TV networks, and people getting fired for racist beliefs.
The truth is that corporations always have and always will enforce a narrower range of ideas than the First Amendment allows. That's fine, because the First Amendment is about protecting us from bad government.
I think this is an interesting point, and I think most would have agreed with this position 15 years ago. Unfortunately, the capacity of these networks to propagate garbage that negatively impacts real life became too much for them to ignore.
As far as supporting one candidate over another, one candidate supports baseless claims being spread on these networks (QAnon, covid hoaxes, racist conspiracies), and the other doesn't.
At least with Facebook it's harder to run disinformation campaigns, since they require a phone number to register. An outfit like the Russian IRA[0] would need a fuck-tonne of Russian-numbered phones to do influence operations, which would be a huge red flag and a sure-fire way of attributing the activity to Russian state-backed entities.
Twitter, not so much. I can still register an account with just an email address (and emails are easy to cook up en masse, unlike phone numbers), then I do my psyop campaign.
Are we really going to assume that nation-state entities can't send agents to buy burner phones in other countries or just downright spoof the phone numbers entirely?
I can go buy thousands of American phone numbers on Twilio right now, the only thing stopping me is my wallet.
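To give a sense of how low that bar is, here's a minimal sketch using Twilio's official Python helper library (the credentials are placeholders, the limit of 100 is arbitrary, and in practice you'd also run into Twilio's billing, rate limits, and compliance checks):

```python
# Illustrative sketch only: searching for and provisioning US numbers in bulk
# through Twilio's REST API. ACCOUNT_SID and AUTH_TOKEN are placeholders.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

# Find up to 100 available US local numbers...
available = client.available_phone_numbers("US").local.list(limit=100)

# ...and purchase each one (roughly a dollar a month apiece).
for candidate in available:
    purchased = client.incoming_phone_numbers.create(
        phone_number=candidate.phone_number
    )
    print("Provisioned", purchased.phone_number)
```

And the verification texts sent to those numbers can be read back programmatically from the same API, so the whole sign-up loop is scriptable.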
Oh yeah, because Twilio would just love to be partially responsible for election rigging. Being a dev at Twilio must be an interesting job: you get to witness the erosion of democracy!
And the idea that "attribution" is a driving concern also ignores the fact that, well, nobody cares. There is universal agreement on Russia's involvement in 2016, and there was virtually no cost to be paid.
> There is universal agreement on Russia's involvement in 2016
I don't think there is. There's a lot of agreement that Russia (and others) tried to influence the 2016 election, but there's no agreement at all wrt how much influence they had. The same is true for Cambridge Analytica: widespread agreement that they did stuff, not quite as much agreement on how effective it was.
Twitter lets you register with just an email address, but will require a phone number to continue to use the account almost immediately upon actually using it.
US phone numbers are easily available for under $1/month from various services, and you don't need the number for a whole month. They're cheaper if you're willing to do some more legwork.
Requiring a phone number reduces casual abuse, and increases costs for abusers, but is only really effective if the abuser's gains are in the same neighborhood as the costs, in which case they'll move to somewhere they can gain more or spend less.
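To put rough, made-up numbers on that: at about $1 per US number, phone-verifying 10,000 fake accounts costs on the order of $10,000, which is prohibitive for a casual spammer but a rounding error for a state-backed influence operation. That's exactly the asymmetry: the requirement filters out low-effort abuse while barely slowing down a well-funded operation.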