Misinformation fuels outrage and mindless social-media shares – study (northwestern.edu)
84 points by gnabgib 7 months ago | 80 comments


Social media/ad tech architecture has to change.

Don't tell people to change. It's a red flag of intellectual bankruptcy.

It's a get-out-of-jail-free card for mindless corporate robots. The clergy used to provide these cards to feudal lords once upon a time. Today the media and the education system provide them to corporations.

More content = more ad real estate.

If you have a magic machine that produces more and more rent-collecting real estate, what happens to the value of real estate?

Meanwhile, the total pool of available human Attention doesn't grow.

If you further filter that pool by global average disposable income, you find it's a very small pool. And it's being massively overfished. (Check the UN report on the Attention Economy.)

This is a bizarre abnormal system that produces accumulating consequences.

Revenue of ad tech keeps growing. Content keeps growing. It keeps getting more addictive. Outrage is the natural output of such an environment, which has been imposed on everyone. Politicians have to spend more and more on ads to get elected, becoming more and more dependent on money to win. So they are struggling to counterbalance these forces.

We have to treat Attention the same way we treat Land, Air, Water, or Natural Resources. Companies can't pollute or exploit things without limit. Ads can't be injected into every 2 mm of space available and every 2 ms of time.


> Don't tell people to change. It's a red flag of intellectual bankruptcy.

But people do have to change. The core problem is a transition from:

- sources being few and somewhat reliable (traditional media), with reputations to uphold
- normal people facing significant barriers to sharing information widely

to a world in which neither of those constraints still holds. And there's no going back, no putting the genie back in the bottle. It turns out that when you give everyone unlimited communication, it exposes weaknesses in how people and societies consume and manage information that were present before but didn't have the environment they needed to materialize. Well, they have that environment now.

There's no going back really unless you want to clamp down on free sharing of information and the "peer to peer internet" in general. People will just have to change their relationship to information. It will take time, but I'm sure we can manage.

Social media and ad tech certainly aren't helping but the root cause is basically technological change and unless you want to roll that back, targeting the companies won't fix it.


We had free sharing of information on the internet before algorithms supercharged ragebait to farm engagement.


> sources being few and somewhat reliable (traditional media)

How do we actually know they were reliable, and not just the only game in town?


It doesn't make sense that this is sustainable. Just because it has existed for the last 20-25 years doesn't mean anything.

TikTok and Twitter are evidence such systems are not as stable as people think.

The paradigm shift where everyone is allowed to broadcast is only possible due to ad tech injecting ads everywhere and at all points in time, and then convincing everyone that all this ever-expanding ad inventory has value no matter how much it grows.

This facade is holding for a couple of reasons that have nothing to do with the value of the ads, or with the need for everyone to have broadcast capability.

They tell Kamala Harris to just outspend Trump on ads and she'll have a shot, and she does. She loses and no one complains about the billions in ad spend. Why? The answer is exactly the same as the answer to why everyone got into subprime mortgages.

Another reason the facade is holding is that there are only two options for huge reach: Alphabet and Meta. If Pepsi increases spend, ad execs immediately call up Coke and tell them to match it. A whole bunch of corporations are trapped in this hamster wheel.

Then there is the great strategy of AT&T, Comcast, Amazon, Apple, etc., who all got into running their own content factories to "bundle" and "cross-sell". This creates all kinds of illusions about the real value of content.

Meanwhile Netflix, Disney, etc., representing people who can actually pay for content, have topped out at a few hundred million subscribers globally.

So people who say we can't go back don't fully know where we have ended up.


> Don't tell people to change. It's a red flag of intellectual bankruptcy.

Are you telling people to change?


The problem persists because nobody has managed to put together the incentive and the ability to fix it.

People won’t fix it, because people is a big amorphous blob that doesn’t have an ability to really focus on anything at all.

Ad and social media companies won’t fix it because fixing it would mean removing themselves from existence.

The best chance we have is OS companies—Apple has managed to build this reputation (however flawed) of being a more thoughtful and privacy-preserving company. Unfortunately, Microsoft forgot how to make operating systems.


These are all banal platitudes built on the same basic rage that you purport to dislike; the solution you imply requires centralization and censors.

The only solution is also the easiest: switch it off.


This same relationship between emotion and inhibition of critical faculties plays a large part in the high error rate of justice systems around the world. In the extreme, it results in the bizarrely illogical "someone has to pay" sentiment of vengeful victims and juries, almost to the point where they are aware their instinct is wrong but would prefer to live in the fantasy of the closure it offers.

Perhaps some good can come out of a technology over-exploiting this human weakness, by forcing society to learn how to combat these weaknesses where it matters even more.


In many ways, vigilantism helped build higher societies. No tribe emerged from the primordial aether with bureaucratized and abstracted concepts of legality and ethics which in any way resembled high civilizations.

Vigilantism was one of the costs "priced in" by social codes. If you screwed a man hard enough, he might just come for you. You can see this reflected in folk wisdom such as "nothing more dangerous than a man with nothing to lose". That cute little adage isn't saying "oh, those with nothing on the line try harder", it's warning that once someone has no chips left on the game board, they might flip the table.

History is replete with this echo: "surrender with honor", "allowing an exit", "chose to retire", etc. In our very safe, very insulated, very civilized world, these might seem quaint or be accepted as received moral precepts, but they are based on a firm inherited calculus: every man, no matter how reasonable, has a point where he'll turn to tooth and claw, if only to make you bleed with him.

We might think of our age as one built on these costs, but so managed that we no longer pay them. Unjust words no longer result in duels, but we say slander is bad. The shopkeeper no longer has to beat thieves, because the justice system punishes theft. And the tribe doesn't have to mob-justice the degenerate who wrecked the common green, because social shame makes the idea of transgressing abhorrent, well before the thought of violence arises.

Or rather... our society was functional. For a blip of time in late modernity, the West managed to reach an amazing peak. We'd priced vigilantism into our social systems so efficiently that it was no longer ever actioned. Unfortunately, over time this led to virtuous violence being classed as mere violence, morally equal to unjust violence. Like many modern ills, this wasn't felt at first, as the moral inertia of the old system propelled behavior that was no longer incentivized by the current market. Slowly, though, new actors came about in an age with new incentives, and they adapted better to their environment.

If a powerful CEO can get ahead with profit maximization through legal murder, and the only penalty is a fractional hit on a rocketing upward graph? Why not? The only external cost is some disapproval from people he doesn't care about. He's optimized for his environment, because he's recognized that the formerly "priced in" vigilantism behind that disapproval has been replaced with a memory of said vigilantism, like a vestigial clause in a defunct contract.

We've been at this point for some time. The grinding inhumanity of our era would have brought our ancestors to violence long before. We've constrained this through the anti-agentic "all violence is (equally) bad" propaganda.

So yes, this murder, and moreover the general public acceptance of it, is indicative of social degeneration. But it's not a descent from normality. It's a return. A ball was thrown into the air, and for a time (the late twentieth and early twenty-first centuries) it seemed to hang suspended at its apex, but it has now begun to fall.

The proper response is to recognize where we are, what we are, the historical anomaly we are exiting, and decide how to proceed.


> No tribe emerged from the primordial aether with bureaucratized and abstracted concepts of legality and ethics which in any way resembled high civilizations.

Why do you say that? Ethics, laws, government, etc. are universal to human cultures, current and historical, both less and more developed (per leading research on cultural universals, e.g. Donald Brown).


They sure are. At the end of a machete, usually. Non-violent order is an emergent higher-order phenomenon.


The previously lawful kinds of non-state violence were (more or less) permanently outlawed in proto-Germany in 1495. It was called "Ewiger Landfriede", ~"Eternal land peace". I guess it's an interesting lead-up to modernity. Before looking it up, I expected something like 300 years ago or more like 1000 years.


> Non-violent order is an emergent higher-order phenomenon.

Could you share a basis for that? What I've read from people who study these things - what is universal across cultures - says unequivocally that it's wrong.

Perhaps it is our bias that only we 'advanced' people could do that.


Vigilantism is fundamentally more equitable as it doesn't require giving one specific class of people the right to enforce the law, which they'll in turn abuse (i.e. the police): everyone has equal rights to enforce the law.


It depends on power; there are no rights or law at all.


This is nonsense. If you remove centralized administration for enforcing laws and allow everyone to enforce them for themselves, you get a society in which one can only enforce the laws on those less powerful, and thus you again get one specific class of people with the ability to enforce laws, which they'll in turn abuse (i.e. the rich and powerful), only worse.


That’s the mafia state.


I never could understand how misinformation gets spread so easily via social media, until the pandemic. My friend basically had a slow breakdown, getting more and more scared and angry and upset. She'd be hooked to the social feeds, and re-share the scary ones that sounded plausible, but often they were fallacious, misleading, exaggerated. She was just trying to cope, and she thought she was helping other people by re-sharing them. Eventually she had to quit socials entirely, and it took a while for her to emotionally recover, but she's better now.

I know this is just one example, but it feels very much like an endemic problem I call "media sickness". The media [in all its forms] is set up to make us ill. It scares us, misleads us. It invents and inflates controversy, focusing on things that don't really matter, yet making them seem imminent and life-threatening. It shows us only what we want to see, or what we want to believe. Or what someone else wants us to see, the way they want it seen. And then it tells us to buy things, which is the ultimate goal of all of this manipulation. And we do, because the movies, podcasts, Doritos, and sneakers from Amazon, are the only things that soothe our frazzled nerves and distract us before the next media blitz.

Our whole American culture today has an epidemic of media sickness. It's worse than the obesity and opioid epidemics, because it's even infecting the halls of power and changing geopolitics. It crosses every class and infects every home. I think the only solution is to go cold turkey. Cut the social media, the TV, the news. Live your life free of influence. You might find that it's especially hard to do that, to live without the consumption, without the constant presence of advertiser-driven content. Even here on HN, we are subjected to the manipulation machine. It's so hard to get free of it.


> Our whole American culture today has an epidemic of media sickness.

It's everywhere. I often go to rural eastern Europe, the ~200-inhabitant town without municipal water type of beat, and families there stop meeting for Christmas & co. over political BS they read online. It's probably even worse there, because half of the topics are imported from the US and don't even exist locally.


Lately I’ve come up with a heuristic that I think anyone can use:

Before reposting something, ask yourself “have I heard about this website before today (or before this news event/cycle)?” If you haven’t, google the thing you just read about, and see if you can find an article from a source you’ve heard of before today.

This isn’t to say that mainstream news doesn’t get things wrong, they definitely do sometimes, but so much of the real garbage I see on social media is people all sharing the same post from some random site called something like newsline.xyz. In fact there’s so much low-hanging fruit like that that I think a large amount of disinformation would go away if people just checked whether anyone at all was reporting the story other than the source they’re reading.


Had a fun experience at work this week: a tsunami warning goes off on the west coast while we are all in a remote meeting. I say that there has been a large earthquake off the coast. 30 seconds go by and a coworker posts a link to a news article with the headline that "tsunami warning tests were being conducted that week" and goes on to tell everyone it is just a test. I open the article and immediately see that it is from 8 months ago. It's an easy mistake and I make it all the time as well; nevertheless, it was interesting to see how quickly the wrong thing gets out.


Ironically this used to be one of the better uses of Twitter.


> Before reposting something, ask yourself “have I heard about this website before today (or before this news event/cycle)?”

Here's a better heuristic: Stop wasting your precious time and mind on low-grade crap. Read/watch high-value sources only - you won't run out; there are too many to read them all.


It's not only the source, it's also the kind of content. It's of course not 100% reliable, but most bad content is pretty lurid stuff. Wedge-issue content of any type can be safely ignored as well - or rather, it's unsafe not to ignore it.


> most bad content is pretty lurid stuff

That is the badly executed bad content. Well-executed disinfo/misinfo is what gets you.

Believing we can tell the difference just by looking at qualities such as luridness is the critical error. It's like people who think they can spot fake reviews - they can spot the reviews that don't fool them, not the reviews that do fool them.


It's not a perfect filter, but I think it's useful.


Got some good aggregate tips?


I'm not sure what you mean by 'aggregate', but there are still serious journalistic organizations:

* NY Times: Ignore the politicization of their reputation, and the demonization. Ignore that it's the boring, status quo answer - it is for a reason. Just look at the articles themselves, which describe the methods behind each piece: the research is extensive, statements are made carefully, and they offend everyone on all sides equally. Their claims tend to be borne out in the long run (though nothing is perfect and certainly they've made mistakes).

* Financial Times: Very expensive, but very good if you can afford it.

* The Guardian: Certainly a liberal point of view, but quality research for their journalism.

With the recent corruption of the Washington Post and CNN, and MSNBC (though that was always biased), we are running out of top quality, credible news.


Let’s say this idea is brilliant. How you gonna get that in front of people and then persuade them to use it?


Part of the problem with that approach is that a lot of people are getting their dis/misinformation from very large sources. Infowars, Breitbart, Zerohedge, The Free Press, Turning Point USA, etc: the list is endless of total nonsense sources that tend to spread around the same polemical and heterodox disinformation. People will cycle through the same content shared by each of these sources as "proof" of their claim when behind the scenes it's all just covering a single person's errant Tweet.


This technique won’t stop those people, but then idk what will. I’m mostly talking about the kinds of things I get frustrated at seeing my liberal friends post.

Usually if I see one of them post something that’s total BS and I point out that it’s BS, they’ll apologize and take it down. But I don’t have the energy to constantly be “that guy” all the time. I think if more of my friends did this (since their reactions show that they are in fact interested in sharing things in good faith) then at the very least my own facebook timeline would have less junk in it.


When we single out publications, let's ensure we're not biased against the Right or the Left.

https://gatewayjr.org/wp-content/uploads/2019/12/Media-Bias-...


Despite the intent to show an equal distribution visually, if you actually pay attention to the individual outlets you see a few things. Nearly all the outlets in the top block, with a reliability score over 40, are considered left or far-left by Republicans. The outlets at the extreme fringe of low reliability scores are all right-wing. I’m honestly not sure how any critical thinker could believe both sides are equally bad at spreading misinformation. Both sides do it to an extent, but the right is clearly worse, and that’s what your chart shows. Believing that is not bias. It’s reality.


I would say that the chart is somewhat limited, based on the score that CNN is getting. Every single day for years, if you were to turn on CNN at any time of day you would see stories about Trump + Russia. Sure, you can call that "analysis", but I think it under-represents how much of an effect repeatedly beating people over the head with one-way "analysis" has.

You probably wouldn't call this misinformation, but I believe it is deception, because CNN markets itself as fact-based "news" (https://www.cnncreativemarketing.com/project/cnn_factsfirst/) when in fact it's just entertainment disguised as news. And like Fox News, CNN tries to bait its viewers into emotional reactions and team-choosing, which ends up causing more division between us.

Anyway, that's why I get all my news from the Weather Channel.


Yep was going to say something similar.

Probably best to stick to something like BBC News for most things IMO - they're not funded by advertising or some billionaire trying to influence things, they're not politically aligned (some people will argue against that, though), and they do generally decent journalism.


Related:

Browsing negative content online makes mental health struggles worse: Study

https://news.ycombinator.com/item?id=42353944


Curation of sources is the key skill for dealing with this change. In areas where you have no expertise, it's helpful to observe their predictive power over time and use that as a proxy for competence.


On Twitter I’ve started to block anyone who posts outrage stuff, even if they’re “debunking” it. I couldn’t care less. I just want to not read it. It’s actually nicer for me.


That's not a good long-term solution on a platform which will continue to subject you to it - by design.


Whether it's a good solution or not is dependent on your effort/utility curves. And it's a social network with low switching cost: I don't need a long-term solution. One day I will stop using it and it will cost me nothing to do so. So it only matters that it works now.


I love to bash social media as much as anybody, but, how different is this from the hoaxes, crazes and hysterias that society went through before social media was ever a thing? Even if we were to eliminate social media, people would just figure out another way to spread viral memes. Humans are wired to rapidly spread ideas the same way they spread the cold virus, not much we can do about it unless we isolate ourselves from each other, or introduce some kind of a govt (or AI) reviewer in between us and every piece of information we receive.


I feel like the question answers itself. We know what the world looked like before widespread adoption of social media, and what it looks like now. Before: obvious bullshit struggled to gain any audience at all and fringe stuff was largely contained by the cost of stamps and labor. After: one rando successfully convinced untold hundreds of thousands (if not millions) of individuals globally that lizard people exist, probably from their phone, in their spare time.


You really don't need social media for that. The Rwandan genocide was triggered by one shitty radio station. People will always find a way to easily distribute propaganda.


Easily? To replicate that you'd need to at minimum hijack a radio station. Compare/contrast with the relative difficulty of fishing your phone out of your pocket. There is also the minor issue of instant global reach, so yeah, hijacking radio stations on several continents should keep one busy a while I'd imagine.


The major difference in my mind is the volume and speed: a constant fire-hose of it being beamed at you 24/7.

I don't think it is hard to ignore the outrage and drama on social media if you want to, but do people really want to?


> I love to bash social media as much as anybody, but, how different is this from the hoaxes, crazes and hysterias that society went through before social media was ever a thing?

It's always the same answer when the question is "what's the difference between X on the internet vs. X before the internet": the speed and scale of it. You wouldn't argue that a nuke is just like a handgun; well, it's the same here.


Uhh okay, but when I have a viral disease, I typically do try to isolate myself and generally am proactive about keeping friends, family, and strangers safe. I don't go to the busiest place I can find and then start sneezing everywhere.

The issue is that people no longer use any rigor to question whether something could possibly be false. I don't need a mechanism to tell me whether something is true or false, I am the mechanism. But I understand that is too much to ask from society in general.


You really don’t see the difference between the weird guy in the bar spouting off some nonsense about aliens and social media manipulating entire countries and their elections?


I do.

Here's a devil's advocate argument for social media: if you only leave it up to the government and certain approved organizations, or the extremely affluent and well-capitalized to have a megaphone, you're effectively handing the keys to the information rails to a very small subset of the population who then gets to dictate what the rest of us get to hear about. With social media you're democratizing the megaphone, with, of course, all of the negative side-effects of giving the average Joe that sort of power.

However, if you've ever lived under a regime where only a select few had that sort of power and they abused it to the extreme, you'd rather put up with Pizzagate and alien conspiracy theories any day of the week. It's akin to an informational Second Amendment, with, of course, all of the controversy and philosophical differences that come with it. And of course you could debate the correct extent of these powers all day, similarly to how there's a difference between letting your citizens purchase a pepper spray can vs. WMDs.


> With social media you're democratizing the megaphone, with, of course, all of the negative side-effects of giving the average Joe that sort of power.

That's an illusion.

Users see only what the underlying algorithm has decided will increase their engagement.

This behind-the-scenes curation of content all but eliminates this democratization you speak of.


I'm only responding to the content of the posted article, which doesn't speak of the website owner putting their thumb on the scales to advance a particular narrative. I'm happy to advocate against that.

But if the algorithm responds equally well to highly engaging posts from team A and team B, then ultimately the algorithm is fair, there's a level playing field. Would I prefer it if it boosted boring academic papers about nothingburgers that nobody will ever read? I suppose so, but clickbait is the content humanity wants to engage with, regardless of medium. It's up to each team to make sure that their comms are written in a way that cuts through the noise, like with any other form of marketing.


> But if the algorithm responds equally well to highly engaging posts from team A and team B, then ultimately the algorithm is fair, there's a level playing field

That doesn't work well for topics where one team, A say, is free to just make up stuff and the other team, B, is constrained by trying to stick to verifiable facts. It gives A a massive advantage.

First, it often takes time for B to develop a response to something new from A and by the time B's response comes out A has added several more posts. Even if B's post completely refutes that first A post, people will see that it hasn't addressed the additional A material and might think that means that additional material is fine.

Second, it is easier, cheaper, and faster to generate A material than to generate B material, so A can make it so there is always plenty of un-refuted A material. They can also tweak any A material that B has refuted and claim that it was just a minor issue and they fixed it.

The result is that readers see a lot more A than B material. Some might figure out from the B material that the people pushing A are not being truthful, but remember there are another dozen or more topics showing up in their feed each with its own A and B.

Most people aren't going to be able to successfully figure out which is being truthful in all of those topics, and so we end up with a lot of people getting misled and scammed.
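
A toy sketch of that volume asymmetry (entirely hypothetical post rates and a made-up engagement model, not any real platform's ranking logic): a ranker that scores every post by the exact same rule still fills the feed in proportion to who can produce material faster.

    import random

    # Toy model: a ranker that treats every post identically still amplifies
    # the side that produces content faster. All numbers are invented.
    POSTS_PER_DAY = {"A": 12, "B": 2}   # A fabricates quickly, B verifies slowly

    def simulate_feed(days=30, feed_slots_per_day=5, seed=0):
        rng = random.Random(seed)
        pool = []
        for _ in range(days):
            for team, n in POSTS_PER_DAY.items():
                for _ in range(n):
                    # identical engagement distribution for both teams:
                    # the ranking is "fair" by construction
                    pool.append((rng.expovariate(1.0), team))
        # rank purely by engagement and keep the top of the feed
        top = sorted(pool, reverse=True)[: feed_slots_per_day * days]
        return sum(1 for _, t in top if t == "A") / len(top)

    print(f"Fraction of feed slots filled by A: {simulate_feed():.0%}")

Nothing in the scoring favours A, yet A ends up owning the feed roughly in proportion to its output, which is the "always plenty of un-refuted A material" effect described above.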


I agree it's an issue, you're describing https://en.wikipedia.org/wiki/Brandolini%27s_law . How do we fix it though?


> But if the algorithm responds equally well to highly engaging posts from team A and team B, then ultimately the algorithm is fair, there's a level playing field.

If only individuals of Team A see content from Team A and only individuals of Team B see content from Team B, then regardless of how equally Teams A and B get promoted as a whole on the platform, it's not really democratized.

You merely have separate Team A and Team B platforms, except they're hosted on the same hardware and exposed via the same web address. The separation is entirely virtual and intentionally hidden from users.

It's little different to a Team A member than if all of Team B was shadow-banned or to a Team B member than if all of Team A was shadow-banned.


But isn't this how all media (or everyday life) is already when people are given a choice? People self-segregate into Bluesky vs Twitter. MSNBC vs Fox News. Vox vs Breitbart. Shia mosques vs Sunni mosques. California vs Florida. People really like retreating into their own flock of fellow believers and don't want to be exposed to information contradicting their beliefs. Social media isn't doing anything new here, it's just selling the product people want to buy.


> But if the algorithm responds equally well to highly engaging posts from team A and team B, then ultimately the algorithm is fair, there's a level playing field. Would I prefer it if it boosted boring academic papers about nothingburgers that nobody will ever read? I suppose so, but clickbait is the content humanity wants to engage with, regardless of medium. It's up to each team to make sure that their comms are written in a way that cuts through the noise, like with any other form of marketing.

That assumes it's possible to have a "fair" algorithm.

As I don't want to make a point about specific parties, call them Team Purple and Team Mauve.

If Purple voters tend to press "share" and Mauve voters tend to comment with a link back to the original, and the algorithm favours shares, should Mauve adjust their comms strategy, or should the maker of the algorithm adjust the algorithm? And what happens if, even with the best will in the world, both update at the same time and therefore Mauve gets more and more out of sync?

But even then, marketing is a game of "who has the most money" — one dollar one vote, rather than one person one vote.
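
To make the share-versus-comment point concrete, here is a minimal hypothetical scoring rule (the weights and the behaviour split are invented; real feed ranking is far more complex):

    # Hypothetical engagement score that happens to weight shares over comments.
    def rank_score(shares: int, comments: int,
                   w_share: float = 3.0, w_comment: float = 1.0) -> float:
        return w_share * shares + w_comment * comments

    # Same total audience reaction (100 interactions each), expressed differently:
    # Purple supporters hit "share", Mauve supporters reply with a link.
    purple_post = rank_score(shares=90, comments=10)  # 280.0
    mauve_post = rank_score(shares=10, comments=90)   # 120.0
    print(purple_post, mauve_post)

With identical underlying interest, the Purple post scores more than twice as high, which is exactly the situation where "adjust the comms strategy" and "adjust the algorithm" become competing fixes.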


So what do we do if it's impossible to have a perfectly fair system?


Make sure that processes involving the development of the system — in this case the social media algorithm — are done publicly, with the kind of democratic involvement you would expect for actual legislation.


> With social media you’re democratizing the megaphone

No, you aren’t. The megaphone is controlled by the owner of the social media network, not the user.


The biggest problem is state actors. They have infinite resources and completely different motives from most normal users.

I follow football (soccer) news, and it's refreshing and interesting to see news that is not contaminated by state-actor bots. There is definitely bias and fake news, but the fake news hardly ever spreads far or lingers long.


We knew this at Facebook a decade ago…. and goaled everyone on shares and engagement.


I'm glad this is something we're starting to figure out. The more something you read or hear makes you immediately mad, the less you should share or repeat it without additional verification. Our instinct is of course the opposite, which is why some are looking at Free Speech as a hindrance when attempting to address bad faith reporting and demagoguery.

But our instinct when someone else has more food than us is to go bash them over the head with a rock and take it... however, we've spent great effort to raise all children not to do that, for obvious reasons. So the answer has to be that we preserve Free Speech, knowing that misinformation and demagogues will exist, but broadly train our critical thinking to override those instincts to the end that we can effectively divide truth from lies.

We've had millennia to suppress our aggressive tendencies to the point where civilization is possible. But the retweet button is, like, 20 years old? We'll get there, but probably on generational timescales.


Social media should take a page from LLM applications and warn humans that humans can make mistakes and hallucinate etc.

But oh no, our precious unique snowflake opinions! But yeah, everybody has one.


Confirmation bias alert:

About 8 years ago, I deleted all social media precisely because I intuitively felt this way:

>“The misinformation ecosystem is not just driven by user behavior, it’s also driven by algorithms,” he says. “When you engage with misinformation—even in disagreement—you’re actually contributing to the increase of misinformation in the ecosystem because you’re letting the algorithm know that it’s drawing engagement.”


Yes there's a problem and it's bad. The question is: how to fix it?

More specifically, who is to decide which information is "correct"? Do we need a "Ministry of Truth"?

China does tackle this problem with a government branch called "Cyber Administration Office".

Compared to this "100% authentic information", I'd rather endure misinformation as noise, thanks.


Also, Water Wet.


This goes deeper than you think. It is an inevitable product of profit-driven late-stage capitalism.

If you want to know the solution, and not just lament the problem, here it is:

https://www.laweekly.com/restoring-healthy-communities/

Also building things like https://rational.app


Just one more app and everything's alright. One of these days now...


We cannot have accurate information in a highly unequal, unjust society. Inequality requires a lot of misinformation to uphold; such an amount of misinformation coming out of institutions will necessitate that people craft potent counter-narratives to fight back for their survival.

Institutions cannot keep taking wealth and expect people to top themselves one by one as they run out of opportunities and resources. Some people will top themselves, but most will fight back. We should all be grateful that they do it with words.

If things keep going the way they have, fast forward 10 to 20 years and people will look back at today as the wonderful, peaceful time that it is (at least it should be for some people). A lot of people don't realize how good they have it right now. Their desire to uphold a deeply flawed system which they benefited from (past tense), without any reforms, comes across as extreme greed.

If you see people with pitchforks outside and think the solution is to ban pitchforks, then your worldview is flawed.


I think inequality just stems from human nature. People are unconsciously selfish, and life is unfair. We have not been forced into it by misinformation - we are, by nature, looking out for our own selfish self-interest. Sure, we can and do help people out, but the default from nature is survival of the fittest, and some people just get dealt a shit hand in life (where in the world they were born, luck, events outside of their control, etc.). It has been like this for millennia - not just since mass media has been a thing.

Removing misinformation will not suddenly lead to universal global equality, it will not "break the spell". Life will still suck for a lot of people in the world if we turned Twitter off tomorrow.


I'm a lot less worried about misinformation than I am about whether we're letting reasonable, educated or informed people speak up.

One of the interesting parts of COVID was it seemed like anyone with a PhD and a YouTube channel was contorting to try and avoid ... I want to call it something less inflammatory than a censorship regime but I'm not sure what else to call it. They were often making points that were technically correct and held up reasonably well in hindsight about possible origins of the disease, likely efficacy of different strategies and going through scientific studies in real time.

I never found out what qualifications the YouTube Trust & Safety team had. I doubt they were mostly PhD holders.

I don't mind YouTubers being wrong from time to time, but the systemic suppression of complaints based on some central office's opinion on whether they are legitimate can't work. It didn't seem to work in practice either.


I see why you might be concerned, but your freedom of speech does not include a right to force others to repeat it for you.

YouTube is a business. If they don't want to use their platform to spread your ideas the government shouldn't force them.

Besides, YouTube is not a reputable peer reviewed journal. Actual science was not hindered.


Also, peer review in its current form was a postwar invention, meaning that for the majority of its history it was not regularly employed, let alone considered to be a line of demarcation between science and non-science.

Having been on both sides of the peer review process many times, I can assure you that it does not pick out all and only "actual science".


It’s often unclear whether the decision to remove politically unfavorable content is a pure business decision or the result of informal lobbying of such companies by the government. The White House has often made requests about the removal of material about certain stories, including e.g. the Hunter Biden laptop affair, and it’s rarely clear whether these requests are entertained due to general political sympathy, the threat of unfavorable regulation, or what. If the government directly censors citizens’ speech that’s a 1st Amendment concern, but if they strongly insinuate that a company will face a hostile legal climate unless it censors a citizen’s speech, that is afaict not illegal.


Do you mean the Biden campaign rather than the White House? Twitter was fielding requests from the Trump White House at the same time since Trump was president then.

And of course we know this (and you probably got your slightly skewed take) from a supporter and member of the incoming President's team who now owns Twitter.


> Do you mean the Biden campaign rather than the White House? Twitter was fielding requests from the Trump White House at the same time since Trump was president then.

lambdaphagy said "the White House". You seem to be arguing that it wasn't one specific occupant of the White House but more a property of the office.

It seems a bit weird to question whether lambdaphagy meant what they said when you seem to be trying to argue that what was said is literally correct. It is one of the most disagreeable supportive comments I've ever seen. Am I reading it right or have I misinterpreted you?


Let's all go yelling FIRE in crowded rooms then.


That's legal in the US if someone actually believes there's a fire.


Legal and moral are not synonyms.



