Repealing Section 230 won’t do what anyone wants (superwuster.medium.com)
170 points by spzx on March 14, 2021 | 244 comments


I understand the issue a bit differently. If a user writes some content that stays as a comment somewhere and it is defamatory, then the user is responsible. So far, so good. This is what 230 used to address.

But the real issue at play is that an algorithm decides that this content will increase engagement within the platform, and it actually does. Now, I think the company behind this algorithm is responsible for the defamation. Facebook, for example, uses the current law to hide behind, so its whole fight against disinformation is subject to keeping engagement at the same levels, which is obviously a very hard task. ("We tried hard, but the problem is too difficult.")

While the first case is good to address, the second one has severely outgrown it. Thus, so many people are even willing to give up something in order to address it.


> While the first case is good to address, the second one has severely outgrown it.

The problem with all of the repeal proposals is that none of them actually address the problem.

Suppose you're a search engine. One of the search hits is making a factual claim. The search engine naturally has no way to validate the claim. That would require an investigation, giving the accused an opportunity to defend their claim and then having a neutral arbiter make a decision. This essentially describes a trial in court.

We already have civil litigation. If you prove a defamation claim, you can have the court order the host or search engine to remove it.

Anything Congress could do about this would be an attempt to end run around the First Amendment, i.e. de facto require private companies to operate a shadow court system and then pretend the First Amendment doesn't apply even though it's being instituted through legislation.

It should be obvious why we don't want this -- if the government is imposing penalties only for not censoring things then you're going to get one-sided suppression of dissent, but if you punish them for over-censoring too then you're just cloning the actual court system and might as well use the real one.

The root of the problem here isn't Section 230. It's that these platforms are too large, so that any mistake they make gets amplified across a billion people instead of a million. Break them up.


I'm not sure if it's just algorithms. Twitter banned accounts for discussing why we needed to protest for fair elections because they "incite violence", yet Khamenei's tweet calling for "eradicating" a people and a nation has been there since 2017.

Twitter banned accounts for challenging the effectiveness of wearing masks because they are "anti-science", yet it didn't do a thing when people were challenging CNN/WHO/CDC's messages that wearing masks was unnecessary and would cause public panic in March 2020.

I'd support repealing section 230 just for this level of double standards.


How would repealing 230 address your perceived double standard? Platforms have the right to make editorial decisions about what appears on their platform. Without 230, they’d probably double down on it. Repealing 230 would not somehow provide an avenue to force them to provide “fair and balanced” coverage.

Maybe you just want 230 repealed so the networks can be burned to the ground through a mass of frivolous lawsuits. If so ... no?


I don’t mind bias or stricter rules. Left or right, progressive or conservative, all fair games. I do mind hypocrisy, as hypocrisy is the result of setting narratives regardless of facts or principles. And yes, I don’t mind if twitter goes down.


How does 230 address hypocrisy? Or are you just airing a personal grudge?


I think the point of the post is that repealing section 230 would not help with “fairness” on the platforms. The arguments in the post are pretty strong imho.


I think you're seeing this case from the article:

> So when you see Trump or other conservatives calling for a Section 230 repeal, sometimes it is just an effort to inflict pain to try and get the platforms to do what they want.


And what do they want? Fairness...

IE: Violent posts from SJWs attacking white people and supporting violence in BLM/AntiFA "protests"? Many MANY still standing and complete silence from the platforms during YEARS of violence, hate and instigation...

"violent posts" from conservatives supporting thousands of examples of voter fraud and calling for peaceful marches - just like Trump called for - while questioning the "truth" about mask mandates that have changed on political whims (Fauci: for masks, against masks, for masks, against masks, etc)? THOSE have to be taken down with extreme prejudice...

The double standards the platforms are pushing are so extreme that "peaceful scholars" are honored... while American Presidents are banned...

You can, for four years, say "russia hacked our elections" with the evidence being a single foreigner paid by the DNC for spy information. But you can't say "democrats hacked our elections" with thousands of Americans presenting evidence.

The article is true that no one is going to be happy... but that won't change while the platforms are the massive hypocrites that everyone in the world can see on full display as they push 1984's "Truth".


If you're going to make a point about the hypocrisy of demanding evidence, your example probably shouldn't include such an obviously incorrect statement as the belief that the Steele Dossier was the only evidence produced regarding Russia's interference in the 2016 presidential campaign.


The fact that most of society thinks it's much worse to say mean things about black people than to say mean things about white people is not Twitter's fault.


Does Twitter allow it because "most" of society believes it to be true... or does "most" of our perception of society believe it because Twitter allows it?

True or not, we mustn't let a potentially minority-held opinion overtake the public sphere because the next generation will most certainly believe it because that's all they'll know. That is why absolute free debate is crucial and feelings should have no part in it.


The world doesn't think it's worse... the world thinks it's hypocrisy to ban for one and not the other.

I'm against both... you defend one.


> thousands of examples of voter fraud

This is not true.

> with the evidence being a single foreigner paid by the DNC for spy information.

This is not true



Honestly, how can you believe Trump's PR person waving a stack of paper on Hannity when no other reporters got to see the supposed evidence? It's just laughable. Especially since they had nothing when it came time to show it to the courts.


Because Trump's been shown true in his "lies" more often than people on the left want to admit? It's laughable how often his "lies" are simply opinions others disagree with... or are actually true - even if it takes months or years for the truth to come out.

Example: My campaign was spied upon.

Even to this day, the left is caught up in lies... "find the fraud" for example was just proven an outright lie.

Honestly, how can you believe the left as they scream LIAR! when they support Biden/Harris? Biden can't speak without lying and Harris couldn't even win her own state because she's laughable?

Trump's got problems galore... but he's still 100x better than anything on the left - which is why he won in 2016... and why they had to cheat in 2020.


The problem is that without section 230, it becomes impossible for any website to have user hosted content. If websites are responsible for shit their users say, no website that posts user content can exist. No one is going to risk being sued over something a user says.


Fine. Websites aren't responsible for the content. But make them responsible for promoting or curating the content, if they choose to do so.

Instagram just shows me all the stuff my friends post. That's it, as far as I can tell. It's great. Facebook amplifies some things and doesn't show me others. If it wants to be in that game, of distorting informational impact, it needs to take responsibility for what it's amplifying.


> Instagram just shows me all the stuff my friends post. That's it, as far as I can tell.

Unfortunately, Instagram is not just "showing you everything your friends post".


I’m kind of curious as to what the person is seeing, because every third post I see is an ad placed by Instagram, and for some of the meme pages you reach nearly that much saturation with sponsored posts.


I mean obviously there are ads, but if my friends post something, it's at the top of my feed, and I can scroll through only content that my friends post (and ads).


It's still not just a linear feed, though. Years ago it was just a reverse-chronological feed of things your friends post. Now the ordering is "algorithmic". I can't think of any good reason to reorder posts other than to try to increase engagement, which is the problem we're talking about here.


If you don't allow curation, that's the end of moderated forums like Hacker News, isn't it?


Who said anything about not allowing curation?

> make them responsible for promoting or curating the content, if they choose to do so


So, Y Combinator can choose to be liable for anything defamatory that manages to get onto the front page (there is, after all, an algorithm), or anything defamatory that they fail to delete out of the comment section. It doesn't seem sustainable.


Which ends up affecting sites like Github and support forums. The moment Section 230 was repealed you'd see thousands of frivolous lawsuits against any company with deep pockets.


Or maybe centralized services like GitHub should not exist?

And the internet should go back to BBS days, where public content is moderated by volunteers in each community separately?

Now I think section 230 is not a necessity for an open internet. It just changed how the internet works by making centralized business easier.


What is the material difference between a web site and a BBS?

BBS operators would be equally responsible, and wouldn't run BBSes. Same with chat rooms and IRC channels — their operators could be sued, so they wouldn't run anything like that as a public service.

Of course, a web forum could still be run over Tor, hiding the webmaster and the participants. There would be no one to sue, so whatever inflammatory content was on such a forum, it would stay unless the moderators cared.

If the point is to stifle the public discussion and push it underground, then removing the protection of operators from liability for UGC is the way to go.


I don't find it unreasonable to ask website / BBS owner to be responsible for content on their website.

I don't remember BBSes constantly getting sued in the old days.


A BBS in the days of yore didn't have deep pockets for anyone to go after. They did however get sued or taken down for copyright issues and other content. There's the infamous case of the Secret Service raiding the offices of Steve Jackson Games [0] over content that was in an upcoming book for the GURPS role playing game [1].

[0] https://www.eff.org/cases/steve-jackson-games-v-secret-servi...

[1] http://www.sjgames.com/SS/


Open source software has been massively buoyed by the availability of platforms like Github and before them SourceForge. Before these platforms OSS maintainers had to manage their own version control, packaging/distribution, and issue tracking. Without those things in place (managed or self hosted) it's pretty hard to collaborate with others. Big projects could afford to do it (both money and effort) but most hobbyist projects would just dump a tarball on their university or ISP provided web space.

Services doing that laborious work for no cost has let OSS authors more easily collaborate and just get work done on projects. They've also enabled small projects to just exist since the author doesn't need to even know about the infrastructure needed to host their code.

You can, and many do, self host Github equivalents. It's not like Github used all the oxygen in the room and monopolized source control.

The same is true for web fora. You can go host your own forum/fediverse site right now pretty cheaply. Domain names are cheap and TLS certs are free. There's nothing stopping you or anyone else from doing that. Plenty of people already are doing so.

Centralized platforms come into being because of network effects. You can host your own forum (or whatever) but that doesn't mean people will come join it. Lots of groups formerly served by on-topic forums moved to Facebook groups because all the participants were already there. It's no-cost vs low-cost and all of the infrastructure is managed by Someone Else. Infrastructure maintenance is a pretty thankless task.

Starting a new group on Facebook (or wherever) is pretty frictionless if all the participants are already on Facebook. There's a lot more friction starting a new little island of discussion with a forum.

By wanting to go back to the "BBS days" you're wanting network effects to not be a thing that exists. You're also somehow expecting people to have the technical chops to run a site. In the "BBS days" only a minority of a minority of people even had the modems to host or call a BBS. Just by the nature of the home computer market those people would be more technically adept than the average person.

My mom, a non-technical user, can join a Facebook group very easily. She's not going to seek out let alone join some forum even if it's dedicated to the same subject as the Facebook group. She's also not going to run her own forum to talk about some topic where she can simply and easily start a Facebook group.

People seem to forget that in the "good old days" of the early web it was mostly the technically adept building and browsing sites. In terms of conversations had or bytes transferred the vast majority was on closed platforms like AOL and CompuServe. Even in the "BBS days" (the latter era) Prodigy, CompuServe, and AOL were far more popular than BBSes. Even with a BBS being "free" online services had a national reach and just far more resources available. Unless you had a big multi-line BBS in your area dialing into a board could be a crap shoot.

The olden days were not necessarily better than today despite nostalgia and fetishization. Some stuff today is not better than things in the olden days. I'm not saying I like or support Facebook or Twitter or that centralization is unalloyed good. Centralization doesn't just happen in a vacuum and for no reason. Usability is very important as well.


Your assumption that current social media is the only way for the masses to share content is not necessarily true. One alternative I can imagine is hosting companies developing one-click solutions for grandmas using cellphones to create their own websites / blogs under their own domains. They could choose whether to allow strangers to post comments on their websites.

Of course there are more possible alternatives.


The good old “frivolous lawsuit” trope, the classic Republican boogieman since the days of malpractice “reform.” What’s wrong with people seeking justice through the courts? Lawyers need to feed their kids too. They do the job others won’t do—ambulances don’t chase themselves.


Frivolous lawsuits are a literal denial of service attack. It costs money to defend yourself in a lawsuit. In the US there's no default loser pays system.

If I sue you and lose, you're out the cost of your defense. You'd have to sue me for your expenses. While an attorney might take that suit on contingency, you're not guaranteed to win and unlikely to recoup all of your original expenses even if you do.

Frivolous lawsuits are not in any way a boogeyman. One need look no further than bullshit DMCA takedowns and bullshit patent suits filed in East Texas to see the model for frivolous lawsuits after a Section 230 repeal.


> Frivolous lawsuits are not in any way a boogeyman.

They certainly were in the 90's and 2000's. This lawsuit[1] became a talking point against "frivolous lawsuits", and ammo in a PR war for tort reform[2] in the US.

[1] https://en.wikipedia.org/wiki/Liebeck_v._McDonald%27s_Restau...

[2] https://en.wikipedia.org/wiki/Tort_reform#Frivolous_lawsuits


I find it quite odd that [1] is considered frivolous, given that the plaintiff required emergency surgery, left the hospital weighing 38kg and ended up partially disabled.


I agree with your point in general, but I'd also argue that there are companies that weaponize copyright and patent law in order to intimidate individuals and companies without the means to go to trial, even if they don't have a real legal standing to do so. GitHub seems like a great place for patent trolls to find their marks.


> No one is going to risk being sued over something a user says.

No small company would, but some multi-billion dollar companies might. They sure would censor the hell out of any user content, though, or else they'd run the risk of being raided in the middle of the night by the FBI if a user found it funny to upload something illegal to their servers.

Which brings us to the OP's argument, that repealing Section 230 wouldn't do what the repeal proponents want. The end result of repealing Section 230 of the CDA is that total online censorship becomes the norm, and only giant companies would be able to benefit from the limited user-generated content that's allowed to exist after the repeal.


It's possible, but your scale is severely limited as you have to actively moderate your site. You also have to build a culture where users value their position in the community and fear losing access to it for doing the wrong thing.

Websites/forums exist in countries without an s230 equivalent.


> The problem is that without section 230, it becomes impossible for any website to have user hosted content. If websites are responsible for shit their users say, no website that posts user content can exist.

That has nothing to do with section 230. Websites weren't responsible for user-generated content before section 230 and they still wouldn't be if it went away.

What section 230 does is extend the same immunity that content-blind hosts have always had to hosts that modify the content they get from users.

This comment upthread:

> If a user writes some content that stays as a comment somewhere and it is defamatory, then the user is responsible. So far, so good. This is what 230 used to address.

is wrong. Section 230 didn't address that case; the user was always the responsible party. To see section 230 at work, you need a more complicated setup:

1. User A posts defamatory content to a web forum.

2. User B posts defamatory content to the same forum.

3. The forum operators see User B's post, are outraged, and take it down.

This is where section 230 makes a difference. It says that, even though the forum takes control of user-generated content for the purpose of expressing its own views, it still isn't liable for content that it hasn't directly touched. In the absence of section 230, the forum would bear liability for user A's defamatory post as soon as they took down user B's post.

There's no problem hosting user-generated content without section 230. You only have a problem if you editorialize on top of that content.


I prefer my Hacker News with moderation. Section 230 is what allows sites to moderate. Without it you have to have either no moderation, or perfect moderation.


I think there's a good case to be made for amending 230.

230 was written for a pre-algorithmic-feed age where user content was displayed in a fixed context set up by the provider. The provider chose the frame, the user the painting.

Now users send content to a machine that is constantly making new frames to better match the pictures it is given. The picture is still the picture - still made by the user - but the site bears more responsibility for its presentation because its 'framing' is based on the contents of, and reaction to, the picture. However, I do not think a total transformation has occurred. Choosing how to display the picture doesn't change its contents.

Writing a law that sensibly engages with the huge diversity of algorithmic strategies would be difficult even for skilled legislators (if only we had some!). Maybe the answer is in formalizing shadowbanning in some way?


In your view, does this apply to all algorithms, or just ones that discriminate based on the content itself (i.e. use content as an input)?

If the algorithm simply prioritizes, say, the most upvoted comment, is that the same? Or how about prioritizing the most watched/liked video?

I ask this because it’s basically impossible to not prioritize content unless a website is a mere directory of user information with no search box. Where do you think the distinction lies between being responsible for user engagement and not?


Any promotion of content without the user explicitly asking for it should be considered an endorsement.

People can search for things they want to follow if they want to. It is crazy, logging back into Facebook, how far it has gone away from the chronological feed of your friends that it started off as.


> If the algorithm simply prioritizes say the most upvoted comment, is that the same? Or how about prioritizes the most watched/liked video?

This used to be handled by the user clicking a button that said "sort by most votes", or the top of a column called "votes". Now it's treated as a mystery that no one could possibly know how to solve.
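The distinction being drawn in this subthread can be sketched in a few lines of Python. The comment data and the engagement weights below are invented for illustration — not any platform's actual formula — but they show how a transparent, user-selected sort differs from an opaque, platform-chosen ranking:

```python
# Hypothetical comment data for illustration only.
comments = [
    {"id": 1, "votes": 12, "replies": 40, "reports": 9},
    {"id": 2, "votes": 85, "replies": 3,  "reports": 0},
    {"id": 3, "votes": 40, "replies": 25, "reports": 4},
]

def sort_by_votes(items):
    """The old model: the user clicks 'sort by most votes',
    and the ranking rule is exactly what the button says."""
    return sorted(items, key=lambda c: c["votes"], reverse=True)

def rank_by_engagement(items):
    """The new model: a platform-chosen score with invisible
    weights. Controversial posts (lots of replies and reports)
    can outrank well-voted ones."""
    def score(c):
        return c["votes"] + 3 * c["replies"] + 5 * c["reports"]
    return sorted(items, key=score, reverse=True)

print([c["id"] for c in sort_by_votes(comments)])
print([c["id"] for c in rank_by_engagement(comments)])
```

With these made-up numbers, the low-voted but heavily argued-over comment (id 1) lands on top of the engagement ranking even though it would be last in a plain vote sort — which is the kind of reordering the grandparent comment is objecting to.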


I understand your position but I'm curious.

There's more depth to the subject.

> any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;

That's section 230.

Yes, if it's defamatory or libelous, then section 230 explicitly allows the platforms to censor.

The two sides of the problem come when it's speech that isn't listed right there. It can be offensive, unpopular, or controversial, but section 230 does not allow the platforms to censor those subjects. Section 230 outright says constitutionally protected.

So the main problem is that platforms have illegally broken good Samaritan protections and have effectively removed their section 230 protection on their own.

However, that's problem #2. Section 230 never made allowances for punishment or lawsuits. If Twitter lost their section 230 protection, they stop existing the next day. That's the only punishment but that would be tremendous wealth destruction.


This implies the distribution of false information is the problem. What culpability do you place on the producer and consumer?


Very concisely put.


There is a point here that transcends what the left and right think will happen: the tech giants are making editorial decisions that are reasonably opinionated.

Whether or not it does what people want, it needs to be recognised that they are publishers, not dumb platforms. This is an acknowledgement of reality.


Also a similar point, Congress keeps summoning them and putting pressure on them so it’s almost a damned if you do damned if you don’t scenario. I am greatly opposed to censorship (other than say content that harms children or is fraudulent) and a platform having a stance on what is “truth” or “disinformation” is quite terrifying, but I also get that it’s an unfortunate side effect of the risk aversion song and dance they are being forced to put on.

To be clear about freedom of information in general, I think competing theories on things like coronavirus and election fraud should be entirely tolerated, especially when the only apparent consensus comes from blindly “trusting the officials” who are just other human beings. It’s very ivory tower and dystopian of them to want to push a narrative of stability when really there’s so much we don’t know.

The idea that we aren’t adults who should be able to question and audit things and have free agency seems like it’s not even on the table anymore which is sad. We’re all walking our liberties into the grave just to avoid uncomfortable topics.

Like Cypher in the Matrix, ignorance is bliss, plug my body back in!


yes, but that's the easy-to-recognize part. moreover, we all want a line (more like a hyperplanar function, really) to be drawn somewhere, but the hard part is getting everyone to compromise and accept any given imperfectly-drawn line.

the core concept of section 230 is that an expressed idea in any form has an author and it's inherently implied that the author endorses that idea by expressing it, but no one else in the chain delivering/amplifying that idea to others necessarily endorses it. the author, then, has liability but the intermediates don't.

where it falls apart is dealing with "exceptions". these exceptions are really separate rules about what kinds of content are so egregious as to apply liability to the intermediates in the chain, piercing their liability shield. there's also contention around the plausible deniability provided by section 230 that allows the platforms to express ideas through others (e.g., editorial decisions).

you could reasonably argue that an editorial decision is expressing an idea (perhaps different from the curated content's ideas) in itself, and therefore applies liability to the intermediate itself for that editorial decision (e.g., racism as an emergent property of a platform).

these kinds of granularities are why i'm partial to the idea of writing regulations as an intent statement (like the above core concept), a descriptive, not prescriptive, elaboration of that intent statement and its major facets, and then numerous examples that give shape to the hyperplanar function without trying to delineate it exactly (and inevitably imperfectly; that is, accepting that it's necessarily imperfect). the populace and (if need be) judges can competently interpolate/extrapolate between the given intent, description, and examples (this takes away power from lawyers as mediators of law, which is a reason it's resisted).


> As this suggests, what the left and right really care about are the content moderation policies of Facebook, Twitter, and so on. And those, as it stands, have little to do with Section 230.

> But content moderation, as an exercise of editorial discretion, is protected by the First Amendment. And that Congress can’t repeal.

[1] https://caselaw.findlaw.com/us-supreme-court/418/241.html


If the speech on Twitter is Twitter’s speech, then if someone defames me on Twitter, I should be able to sue Twitter. Just like if a reporter in the New York Times defames someone, you sue the paper, not the reporter.

If Twitter doesn’t have liability for the speech, then it belongs to the speaker and Twitter should have no obligation to censor it. They can’t have it both ways.

Likewise Twitter and Facebook should be paying their users for providing content that they profit off of, like newspapers pay reporters. Then they can have standing to choose what they want to publish.


>Likewise Twitter and Facebook should be paying their users for providing content that they profit off of, like newspapers pay reporters. Then they can have standing to choose what they want to publish.

What? This sounds like you're arguing that Twitter and Facebook are legally obligated to publish everything a user posts because they don't pay their users?


They are because there’s been an historical division between publishers and carriers.

230 is about that, it lets intermediaries create policies, without having to choose between being publishers or carriers.

Disclaimer: I don’t like this side effect too


There has been a division because historically publication was expensive and labor-intensive, not because of some inherent virtue of the arrangement.

Some people who don't like the moderation policies have created a fictional world where such policies are wrong instead of merely disagreeable, like someone who hates pineapple imagining that putting it on pizza violates some moral principle instead of merely their own tastes.

If you examine such a request, the user nearly always desires not broader liability for websites but for the government to use such liability as a cudgel to force websites to accept speech they would otherwise object to and host it on their platform. Effectively they want to take away others' freedom and compel their speech. This is so obviously immoral that one wonders what moral principles could possibly justify it.


What changed since 230 was enacted is simple from my pov ... There are few platforms.

Without those few enormous platforms shaping the public discourse, 230 is largely good and unproblematic.

Which is why I’d love to have more platforms, while retaining 230.

But I also need to acknowledge that’s not where we are heading to.


What does the number of platforms have to do with anything?


Extreme and fringe ideas would naturally move to unpopular platforms, and die over time.

With too few platforms some are arguing they feel censored, banned from participating in public discourse.


> With too few platforms some are arguing they feel censored, banned from participating in public discourse.

The people making these arguments aren't doing so in good faith. They're playing the victim for extra attention. They claim they're banned from platforms because of their "conservative" politics when in fact it was them directly advocating for direct and explicit violence against political opponents that got them banned.

There's nothing "conservative" about advocating murder for your political opposition. That's not a necessary part of a conservative ethos. It's also not any sort of political discussion. If someone openly advocates for your murder you can't meet them half way.

When platforms get in trouble they make the same bad faith arguments. Parler whined claiming AWS dropped them for being a "conservative" platform while it was clear Amazon dropped them for not taking any meaningful steps to shut down open and explicit calls for violence.


You are describing an important and functional element of social networks going back thousands of years. Substantially new and different ideas most of which are bad and stupid face a trial by fire and over time society adopts the survivors.


Any platform has always had the freedom to shape its reputation by choosing what to publish/amplify and how, within the law, regardless of what it pays the speakers (TV stations don't pay interviewees, for example). 230 has absolutely nothing at all to do with that.


230 protects moderation of published content.

I hate the framing of the practical issues stemming from this in a matter of bipartisan policy.

I’d suggests people are realizing giant Internet companies are chokepoints on the flow of information, and don’t like how this is being handled.


You are mistaken about the rights and responsibilities of carriers. Carriers have never been required to carry all content without distinction. Just think about your cell phone company - if I get a call from someone they think is a phone scammer, their name pops up in my caller ID as "scam likely" - the phone company is doing that!

The biggest flaw in thinking here is that carriers are forbidden from curating the content that they carry.


230 isn't about that. It doesn't regulate Internet sites as common carriers. You're drawing a false distinction.


I'm trying to understand what you're trying to fix with these declarations. What is the consequence of Twitter/Facebook not being responsible for content that appears on their site?

Maybe you should be able to sue the NYT reporter. I think you can, in fact!

I just don't understand what we're trying to accomplish with the changes to how things are currently. Maybe it would help if you described the world you'd rather live in, and how it differs from this world.


> What is the consequence of Twitter/Facebook not being responsible for content that appears on their site?

Well it’s what we have now. Mass misinformation and lies constantly being spread around. The trade-off we made was that it was supposed to be an unfiltered cesspool because it wasn’t feasible to censor views to match the editorial decisions of the company.

However, AI has made it feasible to censor at scale, and the companies want it both ways. They want editorial control and they still don’t want any responsibility for defamation.

> Maybe you should be able to sue the NYT reporter. I think you can, in fact!

That’s worse than being able to sue the NYT. It allows the NYT to hide behind pawns they would love to sacrifice in the name of spreading convenient lies.


AI has not made it feasible to censor at scale, that is not correct.

And as the article makes clear, the idea that Facebook is not responsible for the content its users create has nothing to do with the problem of misinformation. To solve one would not even touch the other.

The suggestions I replied to seem focused on holding Twitter and Facebook accountable for what is displayed on their own website, regardless of provenance. It's not clear to me how that would enable anyone to safely create content that Facebook doesn't believe should be published on its platform.

Without the separation of the creator of the content and the organization displaying that content, Facebook would grow more strict, not less.

Fundamentally, you're trying to involve the government in deciding what "truth" is. That seems much worse than misinformation, yes?


> Fundamentally, you're trying to involve the government in deciding what "truth" is. That seems much worse than misinformation, yes?

We already have the courts involved in deciding what "truth" is. Has worked out pretty well in general, and certainly better than having private companies do it.


Misinformation is a flood of a million raindrops, and as the courts are severely overtaxed, I shudder at asking for a magnitude more volume in a bunch of low-severity cases.

People will debate more or less police funding, but massively more judicial funding is on nobody’s radar.

What the modern age calls for is the ability to easily join a class of defendants and sue them all for $5, with adjudication taking 1 minute. Now your mom just lost $5 because she retweeted an MLM health-scare scam 2 minutes ago.


I don't think this is a good solution at all. If mom believed the scam at the time she retweeted it, there's absolutely no crime there. Being wrong or deceived is not a crime. Frankly, you're proposing punishing the victim.

Any actual solution would have to target the creators of the disinformation, and maybe those who knowingly spread it. Even that is hard in a freedom of speech context though. Even lies are free speech.


Saying false things which hurt other people is tortious, not criminal. That's why we don't need to talk about mens rea, or whether or not metaphorical mom meant to be false or hurtful.

When we're talking about Section 230, we're talking about torts, and when we're talking lawsuits, we're also talking torts.


Sure, I take your point that I'm thinking about it the wrong way legally.

But I still maintain the idea is bass-ackwards morally and legally problematic.

If mum believes the false Facebook meme she's a victim. Yes, passing it on passes on the harm, but honestly I also think that doing detailed research into every meme before passing it on is an unrealistic expectation. Almost a decade ago I passed on the meme about Mr. Rogers being a sniper with x number of confirmed kills and always wearing sweaters to cover his sleeve tats. Exactly how much money should that cost me? In many ways these memes are the natural evolution of "old wives' tales" that have existed for centuries. Probably all of human existence.

And from a tort perspective I'm still not sure this applies. Part of the harm from tort comes from the fact it's repeated. If I tell people X bank is financially insolvent, it catches on and there's a run on the bank I'm certainly guilty of something, but I don't think the people who passed it on in good faith are. Rather the fact people were passing it around is evidence of the tort, not additional torts themselves (but IANAL of course).

And even if passing on a meme is a valid micro-tort in this scenario, now my mom has been materially harmed by whoever shared it with her. Does she now sue them in her own micro-tort lawsuit, and the whole thing bubbles up like some kind of legal reverse Ponzi scheme? It seems like all of these micro-suits floating around are sure to create the exact problem the original comment is trying to avoid.


But your mom shouldn’t pass on memes that make statements like that if she can’t verify their authenticity. She is culpable in spreading it because it’s a deliberate action she took. Don’t spread information you don’t know to be true.

Your “mom” in this context is actively spreading fear about topics and not knowing something is true for certain is all the more reason she shouldn’t be doing it.

We have the Internet now, we don’t need rumor mills and information spreading second hand.


> But your mom shouldn’t pass on memes that make statements like that then if she can’t verify their authenticity. She is culpable in spreading it because it’s a deliberate action she took. Don’t spread information you don’t know to be true.

Are you sure you never spread information that isn't true? Really, really sure? I'd say with high confidence you have false beliefs you unknowingly pass on. I know I have in the past and assume I still do.

People continue to pass on the whole "frog in a slowly boiling pot" anecdote over and over again even though there is no truth to it. It's simple common knowledge no one has thought to question. Should every person who does so from here on now be fined $5? How is this different than someone who is taken in by a meme shared by someone they trust?


Not to this extent, we do not. This would be a whole additional level, multiple levels even, more subjective.

Honestly, it feels like a moot point anyway. The 1st Amendment makes all of this pointless to discuss. It will never happen, short of a literal collapse of the US Government and a reformation under a new Constitution that doesn't include the 1st Amendment.

The government will never be involved in deciding what people can and can't say to the extent that this would require. It's antithetical to our current legal system (not to mention our cultural mores).

Of course Twitter gets to decide what to publish on Twitter. There's literally no other way to operate, regardless of Section 230.


> It allows the NYT to hide behind pawns they would love to sacrifice in the name of spreading convenient lies.

And when it comes to Twitter or other sites, how do you sue one of its users who posts libel and defamation about your character, when said user is hiding behind a VPN anyway?

Twitter is hosting the content, and chooses not to take it down, so if that content breaks actual laws (libel, cyberstalking, etc), they should be held responsible for it.

Whereas if it falls within the purview of free speech, then they should have nothing to worry about.

I realize it's not a popular sentiment here because we want to build platforms and not worry about the legality, but giving websites blanket immunity to host law-breaking content because "it was posted by someone else" means that all of our laws become unenforceable on the internet.


Twitter does take down content that breaks actual laws when ordered to do so by a court. So the state you're asking for already exists.


I can't say I've even heard of a court order for Twitter to remove a tweet before.

Is that really practical, though? To spend thousands of dollars on legal fees to take down a single tweet from an anonymous account that will just repost it again and again? Meanwhile every time Twitter is completely immune to any consequences for hosting and distributing said content?

It's a sucky situation. A service like Twitter can't really function if they're responsible for the content on the site, but all our existing laws are effectively unenforceable on the web otherwise.

I think the hope people have for the removal of section 230 shielding is that Twitter and other content hosting providers will take existing laws more seriously. For instance, Cloudflare today says "there should be laws to handle this stuff, we don't want to enforce anything", and to date the CEO has only ever made two exceptions to that.

The contrarian side to that is going too far and Twitter et al becoming too censorious and taking down legitimate free speech content. None of these service providers can afford to have a legal team on standby to determine what constitutes fair use and free speech or not.

I don't have an answer, I'm just saying this isn't a one-sided issue. Right now the internet has a real problem with libel and cyberstalking. It's one of those things that one tends to not realize or think/care about until it happens to them.


Twitter already does take down legitimate free speech content. Their terms of service are far more restrictive than the US Constitution. And sometimes Twitter management takes down content or bans users on a purely arbitrary basis just because they don't like it (or they think their major advertising customers won't like it).

There's nothing special about Twitter or any other Internet site. If someone libels you then you have recourse through the civil courts. And if you file pro se then it's very cheap.


Misinformation and lies are rarely punishable by either criminal or civil statutes, unless they are defamatory about an individual, and even then the bar is very high, especially if they are a public figure.

Repealing 230 would not have any measurable benefit when it comes to controlling fake news or conspiracy theories.


If AI can do such things then we’d have much bigger questions to answer about how tomorrow should proceed, and what legal framework is right for the coming of a new era.

In fact, we’d start talking about AI courts to scale with the flood of low severity cases.


What I'm trying to fix is the current situation where social media companies have carte blanche to abuse their position as the broadcast point to shape the public conversation.

Most people don't seem to see the problem because so far the power has only been used for things they agree with, but consider if it weren't.

Suppose Facebook was run by anti-vaxxers. They manipulate their algorithm so that posts that question the safety of vaccines are promoted and posts that say vaccines are safe and helpful get buried. If someone takes a strong pro-vaccine stance, Facebook bans them for spreading "dangerous misinformation".

Under the current system, this would be perfectly fine for Facebook to do, despite the fact that this could completely warp public perception because Facebook is where a lot of people get most of their information. After all, they are a private company, right? They have no obligation to host information they disagree with. Yet, Facebook could, if it decided to, dramatically shift public opinion in any way it wants to, just by manipulating how information is presented.

The world I would rather live in is one where social media companies are limited to removing information only if it is clearly illegal, by explicitly defined terms (such as child pornography) or spam. The rest should be left up to individuals to decide for themselves.


Don't have an obligation to censor it, sure, but they'll never have an obligation to keep it up. If they run out of money, the servers go down and the posts stop being available.

Unless the government is taking over paying for hosting the user content, the server owners will be able to not host it


I don't know. If I want to publish something I need a tool that publishes it. If everything can just vanish at any time it can have terrible consequences. Competing with businesses that are allowed to make false promises is really hard. I would have much rather paid instead of losing the audience and the communities I built again and again and again and again until I just gave up. Before I deleted everything I had many thousands of well-organized bookmarks gathered over decades. 90% of it was dead links. My blog is 90% dead links. Many people have some perverted fetish with deleting the proverbial GeoCities; I have hundreds of articles that don't function without it. I'm not allowed to republish [your] lost content either.


Your post is really confusing.

S230 says that a company won't be held responsible for some (but not all) user generated speech on their website. But it also says they do not lose those protections if they moderate that speech. The people who made the bill realised that companies need the ability to moderate content, and so they built it into the law.

The vast majority of Internet users do not want dumb pipes and unmoderated content.


What the parent is saying is pretty plain. If <social media company> is moderating content then they should be liable for it. If they're not liable for it then they shouldn't be moderating it.

I understand that point of view but honestly it won't work for the simple reason that no one - not the providers and not the users - wants it to work that way. I firmly believe that if you try and setup the legal framework to get that configuration people will create technical work around after technical workaround until they get back to the status quo.

It will be like nothing so much as the way SPACs are used today. Whatever else they are, they're a way to do an IPO as it was done before SOX. It's a technical end run around a law no one likes.


You don't need any kind of standing to do what you're not prohibited to do by law, and the freedom to choose what to publish and amplify -- not because it is obligated to but to shape the institution's reputation -- is both well-established and crucial to both media organisations and universities.


What you’re describing is an untenable situation. So, let’s say that Twitter operates as you suggest, as an open platform, and people start posting child pornography, multi-level marketing scams, and racist imagery. Do they have to just suck it up and wait for the police and courts to tell them to take it down? In the meantime, they’ll lose all their customers. If you say, well, of course they can take down kiddie porn, then you agree that they have the right to curate what is posted on their site; you’re just arguing over how much.

Let’s say I run a BBS for stamp collectors, and a crowd of new members join and start talking about their upcoming white supremacist rally on the site. Can I delete their posts and ban them while still having safe harbor under 230? Or should I instead lose my 230 immunity and be forced to face liability over that post where Fred calls Jenny a nitwit loser because she got the date of the Amelia Earhart first-day cover wrong?

What if they’re just talking about knitting, on my stamp collecting forum? Then can I take it down?

What if my forum is for young Democrats? Can I take down posts supporting Republican candidates?


The New York Times frequently publishes editorial content for which they pay nothing. People who aren't NYT employees want to get their message out so they write for free.


Since we passed a law that says they can have it both ways they can indeed have it both ways and moderate as they choose while also avoiding legal liability for their users actions.

Which part of this is problematic?


That they escaped the legal liability despite clearly having the capability and intent to moderate. The reason the exemption was put in place initially was because they had no editorial goal and could not reasonably moderate users.

Now they are no different than the NYT with a crowdsourced author pool except they aren’t liable for libel.


How can YouTube personally vet the years of video that are added in a day?

Hell, even this site's moderation team can't read all content in a reasonable time frame.


Yet they had done so for people who hinted at election fraud, anti-mask info, etc.

They clearly have the capacity to quickly run speech recognition on audio and match on keywords. They could drop all videos that even mention Taiwan tomorrow.


How do you know they aren't relying on user reports and tracking who gets reported for what?

If 17 people per video report election disinfo over the majority of the content an account posts, maybe they post election disinfo.

It's still impractical to remove enough to avoid being financially destroyed by even a minority of bad content if they were personally responsible for their users' content, given that even winning a single unreasonable case could incur a six-figure cost.


There is someone deciding what content to post at the New York Times. If a site like Twitter is engaging in that activity, then they should lose 230 immunity, because they are acting like a publisher instead of a platform. It seems relatively straightforward, doesn't it?


It is not straightforward, because the law doesn’t draw that distinction. Additionally, almost any “platform” has to reserve the right to moderate content to keep its customers and stay in business. Nobody wants to advertise on a site that is filled with white supremacist memes and pornographic material.


I don't understand why someone would down-vote this, please explain.


Sometimes downvotes are used for disagreement instead of a signal that a comment thread is off-topic or boring. FWIW asking why you are downvoted also makes for boring reading so is also discouraged. I'm somewhat doing the same to explain this but I'd rather you know for future discourse.

As far as why they disagree, I'd suspect it is something more or less in the same ballpark as the below.

Excerpt from https://www.techdirt.com/articles/20200531/23325444617/hello...

> If you said "Once a company like that starts moderating content, it's no longer a platform, but a publisher"

> I regret to inform you that you are wrong. I know that you've likely heard this from someone else -- perhaps even someone respected -- but it's just not true. The law says no such thing. Again, I encourage you to read it. The law does distinguish between "interactive computer services" and "information content providers," but that is not, as some imply, a fancy legalistic ways of saying "platform" or "publisher." There is no "certification" or "decision" that a website needs to make to get 230 protections. It protects all websites and all users of websites when there is content posted on the sites by someone else.

> To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a "platform" or a "publisher." What matters is solely the content in question. If that content is created by someone else, the website hosting it cannot be sued over it.


Had to create a new account, and use a vpn to reply to you. I broke some rule that isn't clearly defined on the site, and it's not indicating when I will be able to post again. Is Hacker News actually a place for free discussion of curious minds?

Thank you for your answer - It's helped me in multiple dimensions. The sentiment that important discussion need be entertaining makes me sad, but it is what it is. Be well.


It sounds like you, like a lot of people, have an insufficient understanding of what 230 is. Most people who express similar theories really just want the government to make it illegal to moderate deplorables, with a thin justification. It's easier to vote down and move on the 95th time someone makes a bad argument.


What is a “deplorable”? If you’re using it in the same context as Hillary, that’s not really helping your argument because it’s an ill-defined slur used to describe millions of people.


A deplorable is defined as one who holds odious beliefs or views that aren't merely wrong but actively harmful to themselves and others.

Bigots, anti-vaxxers, the anti-science (the process, not a particular theory), and people who support violence as a means to political change in functional societies.


Yep, keep in mind she described half of the people who voted for Trump like that (https://en.wikipedia.org/wiki/Basket_of_deplorables) so the word basically means “Republican”. Using it is only useful as a Shibboleth to indicate you’re aligned with American leftist views.


The Republican party in its present incarnation is a deplorable group with no ethics and bad intentions. It embodies none of the virtues it purports to represent and many vices. The only non-deplorable reason to support it is to obtain tax breaks, which still means that you are putting a lower tax rate ahead of everything else, including ethics, your fellow man, and even basic competence.

Deplorable aptly wraps this up in a bow whereas the converse charges of "Marxist" "Communist" "Anarchist" are entirely detached from reality.


> The republican party in its present incarnation is a deplorable group with no ethics and bad intentions.

Source? It’s not clear to me how an intelligent person could think this about 40 million people without some severe brainwashing. Rather than “this group appears to have different priorities than I”, it has become, “these people clearly have no ethics”.

How would you feel about right-wingers saying the entire Democratic Party has no ethics and bad intentions because it continually supports baby murder? If you don’t think that’s a fair assessment of the party’s ethics, perhaps you should reconsider how you arrived at your conclusion.


For folks (like me) who found the link a bit dense: https://en.wikipedia.org/wiki/Miami_Herald_Publishing_Co._v....

> Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), was a United States Supreme Court case that overturned a Florida state law requiring newspapers to allow equal space in their newspapers to political candidates in the case of a political editorial or endorsement content.


As absolutely not a lawyer, and acknowledging my huge ignorance I'd suggest that First Amendment processes are heard in court and protected by due process. Aren't they?

For good or ill, §230 bypasses court hearings and due process and so I wonder if it is

a) itself an unconstitutional denial of rights to the users, or

b) actually just fine legally, however an overturning of it would not necessarily be an assault on the First Amendment, only on this congressional shortcut

My "reform" of §230 would be to add on to this congressionally mandated shortcut with some form of due process to the users whose court rights have been bypassed -- if a site wants to use §230 protections, then they have to provide some form of due process to users, perhaps a timely takedown/suspension/banning appeals process, held in the open

If a site doesn't want to provide that, then they can avail themselves of the First Amendment and their §230 immunities are stripped and they are open to lawsuits.


I'll post the same obligatory "actually read your Constitution" note:

The first amendment protects US citizens right to speak freely in public and private venues from being retaliated against by the government.

It specifically means you cannot be denied government services, support, or rights because of any opinions you express publicly or privately.

It does not and never has obligated any private persons or business to listen, rebroadcast, or not *react* to what you say. You have never been protected from the consequences of your speech within your community, nor has anyone been required to enable it. It has never been a protection against the speech of anyone else, for example rallying their community to speak against you or for other private services to deny you patronage.


> It does not and never has obligated any private persons or business to listen, rebroadcast, or not *react* to what you say. You have never been protected from the consequences of your speech within your community, nor has anyone been required to enable it.

So there might be an example of the Supreme Court requiring just that: in a case called Marsh v. Alabama, a private community (a company town) was forced to allow some Mormons to keep door-knocking on private property because of First Amendment rights.

Something to the effect of 'if you have enough control over private space, you start to take on an increasing blend of public square obligations'. At least, that is the conclusion I drew from the below article. Unfamiliar domain name but I got it from memeorandum so it's not afaik some completely off the rails screed.

https://lpeproject.org/blog/after-the-great-deplatforming-re...


Marsh v Alabama is a pretty narrow ruling, and hasn't been interpreted to apply to much more than company towns, which are now illegal.

Cyber Promotions v AOL is a subsequent case which is much closer to today's question: Cyber Promotions wanted to spam AOL, and AOL wanted to filter out spam because it threatened to ruin the internet. Thankfully, AOL won that case, and spam filtering is constitutional.


Apologies, but it's not clear to me the relevance of what you are saying to what I have posted.

I don't think I'm asking for freedom from consequences for anyone, quite the reverse: I think sites should be granted 230 immunities, but only if they provide some form of due process to users, and if they don't, users should be able to take sites to court just like they could if there were no 230.


The language you use seems to indicate that you either assume the First Amendment to be synonymous with freedom of speech, a common misconception, or that you believe the First Amendment would apply to tech companies currently protected by Section 230 were it repealed.


> or that you believe the First Amendment would apply to tech companies currently protected by Section 230 were it repealed.

Is that not what Tim Wu is saying?

> But content moderation, as an exercise of editorial discretion, is protected by the First Amendment. And that Congress can’t repeal.

And so my understanding is that

1. Site content moderation actions are protected by the First Amendment.

2. Gov't can't tell a site what to moderate or not.

3. But without 230, a user can potentially sue a site for defamation or other reasons.

4. 230 provides a site a bypass to those suits, it gives sites publisher immunity.

My suggestion is that publisher immunity from user lawsuits should come with some guarantee of due process. Congress took away the ability of users to sue. My suggestion is that seemed reasonable in 1996, but today Congress should return to the user some ability to negotiate/talk/appeal to sites regarding their takedowns/suspensions/bans. I refer to that as a form of due process. But if you wish, call that a consumer protection law.

I've mentioned this twice now, and people tell me I need to read the Constitution or that I am confusing free speech and the First Amendment.

I definitely have no idea what you folks are seeing, and wish you could more clearly express your ideas and help with that.


What sort of suits do you wish could proceed?

When you say due process regarding being banned what would that look like? Why should anyone have to justify to you why you can't use someone else's website?

Under what terms and situations would they have to reinstate you? Why?


Saying I'm not a lawyer does not excuse you from doing the slightest bit of research.

You never had a due process right to be heard on facebook in the first place so 230 didn't abridge this completely fictional right.

It wouldn't abridge site owners' rights to remove 230; it would just break the internet as we know it in the US. Your suggestion isn't a lot better. It would create a pointless process that would likely be abused by vexatious litigants, so you combine doing nothing for average Joes with giving special interests license to ruin the internet.

If you have not the slightest idea what the law is, how can you hope to amend it?

How about we leave everything as is and if you don't like how facebook runs their show you make your own site...with blackjack and hookers if desired.


I'm not sure how section 230 bypasses due process. If a social media site removes something you posted, your first amendment rights have not been violated. The first amendment only protects you from being censored by the government. It says nothing about a private business enforcing whatever arbitrary rules it has against you.


Due process may not be the most appropriate terminology, or appropriate at all.

But prior to 230, I could sue a site for distributing defamatory material.

Congress removed my right or my ability to do so. It gave my rights away to the site to which it provides 230 immunity.

Sites and society may have benefited from this trade, but individuals have lost the fundamental ability to seek their day in court and have gained nothing.

I think Congress should temper 230 by saying that if a company accepts 230 immunity from lawsuits, it needs to provide basic due process rights to appeals processes to users.

If it doesn't want to provide reasonable appeals processes, it forfeits its 230 immunities and can seek redress in court.


You still have the right to sue the originator of the content, Section 230 just recognises the reality that platform owners (from Facebook down to little guy with a comment section on his blog) are not the originator of the content just because they filter spam and/or [occasionally] delete something manually. Without that, they would of course have deleted a lot more users and a lot more content a lot earlier, because who wants to pay the legal bills for defending some random citizen's claim about a person or company?

How would a "due process" proposal even work? Do we have the US government step in and set global rules determining what is and isn't legitimate speech and who should have posting rights on your website? And if so, how is this making the Internet more free?


This proposal starts looking weird as soon as you go into detail because it ties one persons rights to redress in court with the due process accorded to their opponent - because any due process would start only if a takedown (a) happens and (b) is appealed.

I.e. if user A makes a post that defames you, you complain, it gets taken down, user A makes an appeal according to the new "230+ process" and gets it restored - then you'd have no redress in court because the provider followed the due process. (in any case, due process would be about the process of evaluating whether the post meets some editorial criteria, but the criteria themselves can be absolutely arbitrarily set by the platform; if they decide to ban the posts which contain the letter "a", that's compatible with due process, as long as they look in the appeal and point out that yup, there was an "a" in it so the ban was appropriate; and if they decide to ban only posts which they're absolutely required by other laws and leave everything else, that still fits due process).

In the opposite scenario, user A makes a post that might defame you but it gets immediately taken down by an automated algorithm; user A complains but gets auto-rejected without due process - so then you'd have a right to redress in court, but for what? The post got taken down.

And if you had in mind right to redress in court for the person making the post, they don't have any valid claim pre-230, during 230 and in your proposed scenario either way.


You can ask the website to take it down, and you can sue the person who posted it. Why isn't that sufficient?


But at what point does content moderation turn an editor into the speaker, and thus make them liable? As an extreme case, I could censor many letters of your post, leaving only a few letters expressing something you never expressed. Many people I've heard from on the right are making this case - not arguing to abolish 230, just to clarify it such that moderation is a form of speech and thus invites liability. Even Dershowitz made that point.


Selectively censoring letters so as to change the meaning of a post is not what is commonly thought of as moderation.

Anyway, I think you’re losing the thread a little here. Speech isn’t actionable because it’s speech; it’s actionable because it’s slanderous or defamatory. So calling moderation “speech” doesn’t suddenly invite liability, unless we’re claiming that the very act of exclusion is prima facie defamation, which is a more dangerous idea than the problem you’re trying to solve in the first place. If moderation as such were actionable, then my shitty band could sue you for not including us in your shared Spotify playlist. It would open the door for any sort of public curation whatsoever to carry legal risk.

Losing the tool of moderation would kill off this very forum, which would become overrun by crackpots, trolls, and V1Agra spam in short order. By pruning the weeds of bad-faith discourse, moderation allows good-faith open discussion to thrive.


Censoring? I've had posts EDITED on a stack exchange platform. Multiple times, with the result saying the opposite of what I said.

I did the only thing I could do, stopped using that site. Just like with Facebook :)


Section 230 is clear on that, isn't it? Does the moderation happen before or after the post is visible?


On major platforms it's both


Wu is amazingly dead on.

Wu's new boss has been parading some wholly wrong assumptions about Section 230. I hope Wu can convert him into the highest ranking politician who understands 230 in a non-delusional way.

more reading: https://www.techdirt.com/articles/20200531/23325444617/hello...


Double think, double speak are alive and well. Using section 230 in an intelligent manner would require discernment. That doesn't forward anyone's agenda.


> We don’t like you; we want you to suffer. Very 2020

That seems to sum it up.

I don’t think there’s a way to legislate the way we treat and feel about each other.

I guess we need to do that the old-fashioned way, by getting out of our pods and talking to each other in person, like the ending scene of Surrogates.


(In case anyone doesn't recall, Tim Wu coined the term "net neutrality" and wrote The Master Switch.)


Excellent book! It made a good case that the Internet, too, was designed to centralize control. As long as a few control the Internet infrastructure and the satellite space lanes, it is NOT FREE.


What if these platforms defaulted to showing everything users opt into, like before the AI started curating for engagement? Then they could provide users a choice of curation algorithms tuned in various ways.

So long as the platforms still removed clearly illegal content, they would comply with the law without engaging in editorializing.
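A minimal sketch of what that user-selectable curation might look like (all names here are hypothetical; it assumes only that posts carry a timestamp and a vote count):

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    created: float   # unix timestamp
    upvotes: int

# Hypothetical pluggable curation: the platform ships a neutral
# chronological default, and users opt into other rankings explicitly.
def chronological(posts):
    return sorted(posts, key=lambda p: p.created, reverse=True)

def most_upvoted(posts):
    return sorted(posts, key=lambda p: p.upvotes, reverse=True)

ALGORITHMS = {"chronological": chronological, "most_upvoted": most_upvoted}

def feed(posts, choice="chronological"):
    """Return the user's feed, ranked by their explicitly chosen algorithm."""
    return ALGORITHMS[choice](posts)
```

The point of the design is that the platform never decides per-user what to surface; the user picks a ranking function, so the platform's role stays mechanical rather than editorial.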


There is someone deciding what content to publish at the New York Times. If a site like Twitter is engaging in that activity, then it should lose 230 immunity, because it is acting like a publisher instead of a platform. It seems relatively straightforward, doesn't it?

Seems like people aren't thinking very well these days.


> Seems like people aren't thinking very well these days.

They are thinking that they want to control all content online, that's all.

They just aren't going to tell you that.


The tech companies acted in a coordinated effort to "fortify" the election (per the NY Times article): helping to elect their preferred candidate, hiding negative stories, and skewing the algorithms toward one candidate. At the very least they were making a massive campaign contribution, and Section 230 gives them immunity from the laws that would apply. At worst, they were spreading political propaganda - the antithesis of democracy. They also would not have gotten into a market-leading position had they disclosed their plans to do this, so I feel it's a type of fraud. Just like Joe Rogan lying to investors about not being censored on Spotify, when it now turns out he is: he personally gained 100 million from this lie, and Spotify investors got scammed. I hope more laws do get made so this kind of manipulation is punished.


What are “the laws that apply”? Do you not feel like Fox News and OAN made billions of dollars in campaign contributions to Republican candidates? What repercussion did they face? Heck, remind me how we punished the government employees who violated the Hatch act and campaigned for the president while on the government payroll.


I agree with you. It's a problem that journalists are acting like advocates or political activists. TV stations don't have Section 230 protection, so at least there is some redress for defamation; the Covington kid, Nick Sandmann, recently won a multimillion-dollar settlement from CNN. However, it's not looking like the store owner who turned over Hunter Biden's laptop to the FBI will succeed in his suit against Twitter for being effectively called a "hacker". Twitter called the contents of the laptop hacked material, and even banned a legitimate newspaper's account for a week.

The implication of the "hacked material" was that the store owner was a "hacker". The store owner claims the laptop was left behind in his repair store, and he legally gained possession of it. He sued twitter for defamation.

Because of Section 230 protection, the judge will likely dismiss this case - something that would not happen if a newspaper had done this.

I think we want conflict-of-interest laws that force disclosure if you are paid to post, or to materially manipulate content, on behalf of a candidate.

In an ideal world, I think journalists should have an ethics body, similar to engineers, so that at least in the most egregious circumstances, if you are found to have been paid and not to have disclosed it, then you don't get to call yourself a journalist.

I watched an American travel YouTuber who lives in China and sometimes does travel promotional content for them. He came out in favour of a candidate before the election. If he received money for doing that, I think there should be a law requiring him to disclose it.


I could be wrong, but if Twitter spoke as itself and said “Hunter’s laptop was hacked,” then they can be held liable outside of 230, because that is their own speech. There’s no magic loophole where a platform’s own speech is immunized. However, this sounds like a weak case if they didn’t specifically accuse someone of hacking it. In addition, just because the shop owner reportedly took title to the laptop hardware when it was abandoned doesn’t mean he took title to all the material on it. I doubt anyone is going to find someone liable for defamation for calling the doxxing of someone with data from their own laptop, abandoned or not, “hacking”.


I feel empathy for the store owner. He genuinely got into the kind of situation in which people get whacked, like in the movies. After he disclosed the laptop to the FBI, the FBI threatened him. In such a political case, judges and the whole corrupt system are legitimately out to get you. The fact that he made copies that he gave to the press may have saved his life. So I can't really blame him for leaking anything either; under the circumstances it was probably the smartest thing he could have done.

The judge in the lawsuit has already made some rulings invoking Section 230 protections.


> In helping to elect their preferred candidate

If they were actually trying, then we don't have much to fear from them.


Debating whether repealing Section 230 will suddenly grant people free speech on social media platform is missing the real issue. We should repeal Section 230 in order to destroy the social media industry.

Online you do not interact with "regular people" but with the people who put the most effort into their online presence, and those people are overwhelmingly individuals with very real mental disorders on the narcissistic spectrum, often with other comorbid disorders.

On social media in particular you are exposed to both narcissists and in general a narcissistic mode of communication - because that's what social media is. Social media is a vehicle for turning an individual into a narcissistic persona. So even if you are not narcissistic in real life your social media activity will make it look like you are.

Social media is very unhealthy. It is a recipe for a mental disorder. It is primarily used by people with mental disorders. It is designed by people with mental disorders.

Quit social media—these online free speech advocates just want more people to pay attention to them which leads to more screen addiction. It's cancer for your soul. And I am not sure I am just being metaphorical.


You do realize you are writing this on a social media platform, right?


The key difference is AI curation. What you see here depends on what time things get posted and how many up-votes posts get. FB, Twitter, et al. decide what to show you based on what they think you will like, using opaque, complex software.
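For contrast, the commonly cited approximation of HN's ranking is a transparent function of just points and age. The real constants and penalty rules aren't public, so treat this as purely illustrative:

```python
def hn_rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Commonly cited approximation of HN's front-page score:
    roughly (points - 1) / (age + 2)^gravity, so scores decay with age."""
    return (points - 1) / (age_hours + 2) ** gravity
```

Anyone can reason about why a story ranks where it does; that auditability is the claimed difference from engagement-optimized, per-user feeds.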

If a newspaper publishes a 'letter to the editor' that is libelous, the paper is liable because it chose to publish it. That's what the big tech social media do.

If Section 230 got repealed, it would have effects on sites like HN, but it would be catastrophic for big tech social media.


> What you see here depends on what time things get posted and how many up-votes posts get.

Is that not a large part of how most social media algorithms work?


It appears sites regularly add or remove items from their trending lists, both by manual review and by algorithms - which are essentially automated manual review by the person who wrote the algorithm. That kind of control is the same as a newspaper determining what headlines you read and what stories get the most attention: basically controlling the conversation and making editorial decisions.


The collateral damage would be unacceptable. Goodbye any comment section, goodbye bulletin boards and forums; the damage may even extend to things like MMORPGs.


> It is primarily used by people with mental disorders.

You've been on HN for a decade. Repealing Section 230 would kill it just as dead as the other social media platforms.


So basically “ We don’t like you; we want you to suffer.”

We don’t ban or destroy things just because they contain narcissistic people. We certainly shouldn’t lump all social media into this camp.


230 protections need to be something that can be challenged in court. Today they are abused worse than the DMCA, and there is no recourse. Good luck trying to strip protection from an entity that is essentially a publisher.

I think companies would be very honest if there was risk of repeated and consistent infractions.

I wholly believe in reform which brings some power back to the consumer.


What would you like to challenge them on? Are you thinking that stripping their 230 protection would essentially put them out of business, and therefore force them not to discriminate against viewpoints?


> these online free speech advocates

No, I want to challenge cancel culture and all the other radical activism that’s plaguing the country.

If you want to effect real change, let’s reintroduce the FCC fairness doctrine and prohibit the Sinclair group from giving talking points to local media outlets.

The media is far more toxic than social media.


I'm not 100% sure if section 230 should protect those platforms providing anonymous speech. While anonymous speech is important for free speech purposes, section 230 is about shifting the liability of the speech from the "publisher" (facebook/twitter, using air quotes as publisher isn't really accurate) to the writer. However, if these platforms are providing anonymous speech, then it makes it much more difficult to actually go after the writer.

Perhaps one could argue that by providing such a speed bump in the process of figuring out who the writer is, Facebook/Twitter are aiding and abetting said speech and hence bear liability for it. And that service providers over a certain size either cannot allow anonymous speech at all, or at best must have serious KYC rules with straightforward unmasking.

I don't like where my thought experiment leads, but I really dont have a good answer for this.


So, is "compsciphd" your family name or your given name? Have you really stopped to think these things through? You and every single other person posting here, on Hacker News, with pseudonyms including one time throwaway accounts, HN a "platform providing anonymous speech", do understand that 230 makes this place possible right? Right? Are you suggesting dang and co should not be allowed to moderate ever again without Y Combinator becoming legally liable for every random thing here? That we should all have to hand over government ID to post here?

This is fucking nuts. The good answer is for people to stop giving so much credence to anonymous speech without thorough analysis, or at most to have a more streamlined process to get it taken down if it's not defended.


So let's take a related solution that combines both of what we said.

Give everyone the ability to get the equivalent of a blue check mark, i.e. this person has been verified as "real", with only one independent account associated with said "real" person (though perhaps multiple non-independent accounts, all publicly associated with one identity - for example, someone with a public identity and a friends-and-family identity on social media would tie the private identity to the public one, and both could be marked as "real").

Then one can easily distinguish people who are "real" from those who are anonymous. It might also help with the bot issues on these networks.

also my identity here isn't particularly anonymous. I've linked to my work often enough that anyone who cares would know who I am.


This argument isn’t wholly unreasonable. But it would lead, in practical terms, simply to the abolition of anonymous and pseudonymous accounts on social media.

I could live with that, but I get the feeling those calling for Sec 230 abolition want something entirely different.

I’m also not entirely sure what sort of “liability” people are talking about. In the US, there is very little liability for any sort of speech. Insults, no matter how gross and objectionable, are generally protected as simply being subjective opinions, for example. So is blatant racism.

Only statements of fact that are false, made knowingly and with malicious intent, are actionable. I believe very little of what people object to on social media actually falls under that definition.


So you're right, but part of the problem is that there is no personal consequence for this anonymous speech. It's also impossible to distinguish between "fake/non-verified" users and "real/verified" users at mass scale. So I modify my proposal: if we don't want to get rid of anonymous/pseudonymous accounts, we have to take steps to enable users to really devalue anonymous speech and not put it on the same level as non-anonymous speech.


The platform may publicly provide a forum for anonymous speech, but that doesn't mean it is anonymous. One of the most famous forums for anonymous speech is 4chan's "Random" forum, and even there prosecution was possible against a user. The site owner cooperated with investigators. This wasn't a surprise to any long-time users, because everyone knows a site that doesn't cooperate in criminal investigations is not long for this world.

http://www.thesmokinggun.com/buster/fbi/turns-out-4chan-not-...


In legal cases, it's possible. But that removes any consequence for anonymous speech that is not legally actionable. While there's value to that type of speech, it should also be devalued by others.

So, see what I wrote above as a modification of the proposal. To encourage all users to be verifiable, for end users to be easily able to identify non verified users, and hence to be able to devalue those opinions.


First, repealing Section 230 is different from reforming Section 230. I mostly see calls for reform - not repeal.

Second, many people are debating how they think or believe it should work.

Let us instead debate the actual content of Section 230 [1] law.

[1] https://en.m.wikisource.org/wiki/United_States_Code/Title_47...


It's worth remembering that up until very recently section 230 was invoked in these debates mostly by people lying about what it says. When such people change their tune and now say we have to reform or repeal it, I think that should be met with a fair amount of skepticism.


Are you sure people are "lying" about Section 230? Or maybe they have a different understanding or misunderstanding? Is "lying" becoming another word that has lost its meaning?


Others have interpreted these debates in a similar way:

https://popehat.substack.com/p/section-230-is-the-subject-of...


Worth reading this post on how section 230 is often misunderstood:

https://www.techdirt.com/articles/20200531/23325444617/hello...



Repealing 230 is the MAD option. If you're not willing to play ball, we'll destroy the ball.


I see three problems with the original posting. First is the presumption that repealing 230 will damage the internet in some way; that would have to be demonstrated. Tech companies already must have a strong legal department in 2020, especially the smaller unpopular ones fighting for basic banking and internet connectivity. Donald Trump, the big defamer as the OP claims, only had one defamation lawsuit, from Stormy Daniels, which went nowhere. So I don't see how all of a sudden more lawsuits are going to jump in just because they can add Twitter as a defendant.

The second problem is the misunderstanding of what conservatives want. While it is correct to say that we want the terms of service to be applied consistently, and we disagree with vague labels of hate speech and disinformation, that is nothing like the fairness doctrine, which is about balance. The core of what conservatives are asking for is that discussion of what government policy should be must not be censored by these platforms. When we have serious concerns about lockdown policies, election integrity, keeping predators out of girls' bathrooms, or preventing illegal immigration, these very discussions are censored. This is dangerous, of course, because it allows our elected leaders to ignore potentially widely held views, and to see only the types of conversations that the left-leaning heads of media and big tech permit.

The third problem is the presumption that the left wants more censorship. While it's true that the advocates for restricting potentially harmless speech because it is potentially harmful are on the left, there are studies showing that people on the left feel more restricted about what they can say and fear repercussions for it. While there will always be some on the left who truly believe removing dissent would be an improvement, that isn't widely believed, as anyone can figure out by talking to people with differing political views.


There is a huge failure of imagination here. Repealing section 230... Would it completely upend existing social networks and internet norms? Yes, that’s a good thing.

Tim talks moderation... we’re not going to moderate billions of people post-230; they’ll just be responsible for their actions.


They’re responsible for their actions today.

230 doesn’t immunize commenters from liability for the words they post online. It protects the platform, while also allowing them to moderate.

Post-230, platforms would have to zealously moderate to avoid repercussions. You think bots are a little too aggressive today? Imagine those auto-ban dials turned up to 11 if Twitter bore direct responsibility for every word tweeted out.


I don’t want them to moderate. Many are unhappy with the platforms moderating content. Allow people to moderate the content they see.

There is a huge market for shifting content curation and moderation to users and away from platforms.


Every site needs moderation. Otherwise it only takes 1 bot posting non-stop racial slurs to make a website completely unusable.


Why do we have to be broadly in one of these categories?

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This should only be true for platforms that don’t algorithmically curate the content that users see, or platforms that don’t editorialize. I don’t care about hypothetical censorship or bias toward conservatives. I want platforms to stop algorithmically curating content. I want platforms to stop editorializing.

These companies are choosing the content that users are seeing, and they are editorializing that content. They don’t deserve Section 230 protection, and they shouldn’t have it.


" it is possible that what conservatives want is not to repeal 230, but explicitly condition immunity on a “fair and balanced” content moderation — i.e., a social media fairness doctrine backed up a threat of immunity-stripping."

This makes a lot of sense. The idea of making forum owners responsible for the content their users post would result in far harsher moderation. Which is opposite to what they want.

Have the calls for Section 230 repeal subsided since the change in administration?


>The right-wing fantasies about 230 repeal are even more off base. For one thing, without Section 230 immunity, a figure like Donald Trump would almost certainly be kicked off Twitter, because he constantly defames people.

Tim Wu seems to be misrepresenting what the right wants, or living in a left wing bubble. Repealing 230, for the right is about scorched earth. Deplatform the left too, by repealing 230. The right has already been deplatformed. Trump was already kicked off Twitter. The right has no expectations of Trump returning to twitter. Repeal 230 is about destroying twitter, and all the other left wing social media too.


Agree. And the competition-based alternative would have carried more weight before big tech's coordinated attack on Parler.

And I am not convinced that social media is something that we will miss. A decentralised approach with lots of independent websites, each responsible for their own opinion, is probably a healthier ecosystem.


Online social media is simply a threat to monopoly over more traditional information grooming that goes on in certain communities.


There's almost no such thing as "fair and balanced" content moderation. Wu's own bias leaks through in this article, as he lists right-wing conspiracy theories but fails to recognize that the mainstream news pitched a left-wing conspiracy theory against Trump for the better part of 3 years: the "Trump is a Russian asset and puppet" theory.

As he states, the left is concerned with violence, yet they failed to take any responsibility for their own rhetoric, which led to riots through most of last summer in many major cities.

As Wu says, the left thinks "We have a huge problem with fascist disinformation and propaganda". But what they fail to realize is that they've deluded themselves into believing that garden variety conservatism is fascism, because the f word was repeated loudly and often enough by people with influence, that people started to believe it. Nevermind that a healthy democracy demands at least two healthy and functioning parties or you might as well be North Korea. Nevermind that American conservatism has never been about fascism, ever. People are abusing a term, deluding themselves of its validity, flattening other people, and using it to justify the deplatforming of half the country.

So much of what we see in politics today is an escalation of reaction and counter-reaction.

I would argue that if you are a conservative and you haven't at least flirted with the idea of becoming more of a liberal, if not gone through that paradigm shift yourself, without losing your respect for those who hold conservative values (or vice versa), then you can't even begin to attempt to be unbiased. Unless you can hold both paradigms in your mind without contempt for either, you can't begin to be neutral. Few people are capable of this - I think less than 5% of the general population, in today's heated political arena. And you can't expect institutions that bias their hiring towards one paradigm or the other to be neutral. If Jack Dorsey is a super liberal guy and he doesn't have a neutral personality himself, how can you expect the culture of Twitter to be neutral? It's far easier for Dorsey and his Silicon Valley buddies to surround themselves with people from the Valley, or those willing to move to the Valley (who tend to co-opt themselves into the politics and worldviews of the Valley in order to fit in).

The reason we feel slighted by Facebook and Twitter in a way that we don't feel towards MSNBC or Fox is that we have to opt into news channels, and we always have the option of opting out of cable news altogether. But tech has evolved to the point that you can't easily navigate the web without touching at least one of these homogeneous, corporate, left-wing monopolies. The whole idea of Section 230 was to promote the vibrant free-speech discussion forums that existed in the 90s (and still do in dark corners of the web today). It has also evolved to the point that algorithmic feeds have been fine-tuned to manipulate human emotions in a way that the 90s framers of the Section 230 law could not foresee. Social media and tech companies can alter your perception of reality, and regularly do.

I agree with using antitrust laws to bust up the multi trillion dollar behemoths. But I don't think "fair content moderation" is a realistic goal, given how few people are even capable of it and given that much of the problem is systemic to the homogeneous monopolies themselves. Ultimately, competition will solve this in the long run, I believe. But like a forest that has grown dangerously thick, it might be time for controlled burns of these corporations to spawn a healthier digital ecosystem.


The term "fascist" now means "anyone who disagrees with me."


It's not a new phenomenon:

>George Orwell wrote in 1944 that "the word 'Fascism' is almost entirely meaningless ... almost any English person would accept 'bully' as a synonym for 'Fascist'".

https://en.wikipedia.org/wiki/Fascism#%22Fascist%22_as_a_pej...


Depends which side you're on. The other side tends to use marxist/socialist/communist pretty interchangeably.

I'm also sure many of the people who call each other these names have little idea what the terms actually mean.


It’s deeper than that. I think many are simply labeling those using fascist techniques as fascists. Let me explain:

During the Great Depression, very similar rhetoric was used to create the authoritarian fascist regimes. One example is the false narrative that one could blame all of their economic problems on certain racial/ethnic groups. The horrid rhetoric suggesting that Jews caused the financial crisis in 1929 is very similar to the horrid rhetoric saying that Mexican immigrants are stealing American jobs and causing widespread economic devastation.

We _should_ be very concerned when we see such patterns repeating themselves. Frankly, if the Republican Party were all about “garden variety conservatism”, it would be banishing the extremists from its ranks. That’s not happening. Are Republicans fascist? Maybe not. But do several use fascist techniques? Definitely. And those not actively using those techniques are complacent or even accepting.

“Garden variety conservatives” did know what Trump was doing in 2016, and they called him out for it in the Republican primaries. Though I have no rosy view of the Conservative party before then either, it has now overtly embraced many fascist concepts.

Though people would still disagree with “garden variety conservatives”, people have a much bigger problem with fascist concepts.


If you start calling "a thing that fascists also did" simply "fascist" then you're watering down the term quite a bit. It could extend far further than most people would find reasonable


Similar to how white people are blamed for everything today?


Yeah, the mainstream media ended up getting behind some really nutso anti-Trump conspiracy theories. One of my favourites was the one about Trump supposedly using DNS requests from email servers as a secret communication channel with Russia (and this one US chain of health clinics) because it made absolutely no sense on any level.

It would have made such a poor communication channel that using it would require some other channel - better in every way, and never found - just to agree in advance on what the supposed communications meant (so why not use that instead?). It could only have been set up with the help of a subcontractor of a subcontractor whom Trump had no reason to trust, and who insisted no such thing happened. And all the evidence fits the alternative explanation that the DNS lookups were a normal response to receiving ordinary marketing emails about Trump hotels. About five publications rejected the story for these reasons before it got published.

Yet when it did run, the New York Times pushed back in the gentlest way possible against the Clinton campaign's demands for an FBI investigation - demands which went viral on social media - by saying the FBI had looked into it and concluded the evidence was consistent with normal marketing emails. The Times got so much heat for this that they eventually said it was the wrong decision and they wouldn't do it again. And the only thing the mainstream press described as a conspiracy theory was the idea that the evidence was consistent with the normal operation of email systems, even though that's what most technical people concluded, regardless of political affiliation.

There was some really heinous bullshit around the legitimacy of the 2016 election as well, which meant the press really didn't have a leg to stand on when then opined about how dangerous it was for Trump to undermine the legitimacy of the 2020 one. Though of course that didn't stop them.


Interesting that you brought up that DNS story, because an interesting link posted here at least a few times suggested those were fabricated. It was an interesting analysis. Check it out: https://weaponizedautism.wordpress.com/2017/04/09/trump-dns-...


Conservatives want political affiliation to be a protected class. It makes sense -- they're a minority (especially online, but also offline at a national level), and they'd like to be protected from the majority, who they feel at this moment are persecuting/abusing them for their beliefs.

Lots of problems with that (and the irony is palpable), but I think that gets them all the protection they want without any of the Constitutional crisis.


In the US, conservatives being a minority is mostly the product of their having disproportionate representation relative to population. The nature of a two-party system incentivizes both parties to obtain roughly 50% of political power. There is no need to compromise your platform to get more votes when you already have enough to win. If larger states had more representation, then the Republicans would shift slightly further left.


Why should a minority be "protected" just because it is a minority? The only protection offered to a minority is, and should be, for past abuse of that minority.


There are a bunch of protected classes already in US law, and I believe the reasoning tends to be that the folks in those protected classes would, if not protected, be trampled by the majorities for things the members of those protected classes have no control over. Some protected classes include Americans with disabilities, African-Americans, etc.

You can't fire members of the protected class, and (I believe) you can't ban folks from Twitter for being disabled or black.

Conservatives seem to want that treatment.


I think disability or skin-color are bad examples, religion is a better one. You can't change whether you're missing a leg (well... yet) and you can't change the color of your skin.

You can (though probably not really actively) change your political convictions the same way you can change your religious beliefs.


I wonder what happens once you can get a new leg and refuse to do so. Will the protections still apply?

This is not a morally clear situation; some people may refuse a cloned leg in the future because the price tag will be too high, others for religious reasons etc.


It's a false dichotomy to begin with.

If political belief is caused by personality traits (e.g. orderliness) and brain structure (e.g. amygdala size), is it really a choice?


True. I do not really feel I have much latitude in what I believe. My only choice is to pretend X or Y.


If you're interested in improving your ability to grow and change based on feedback, that is a skill that can be worked on.


A conservative can shrink their amygdala in response to verbal feedback? A conservative can become low-orderliness when they've been high-orderliness since childhood?

This is gay-conversion therapy territory.


I am honestly struggling to come up with a constructive response to, "People's beliefs are determined by biological factors at birth."

What you seem to have said here is, on its face, so obviously wrong, it's hard to take seriously.

I mean no disrespect, I just don't know how to continue engaging with you.


Political beliefs are caused by brain structure and personality traits, in an interaction with environment/identity/self-interest.

You saying people can just change their political beliefs when there are scientifically known biological causes of said beliefs is akin to proponents of gay-conversion therapy.

If you think personality and brain structure aren't part of the picture, then you're not aligned with current scientific knowledge - your faux outrage and strawmanning notwithstanding.


> when there are scientifically known biological causes of said beliefs

This is false, at least to the degree of accuracy, potency, and certainty as you present it here. They are "part of the picture", but one small part, not the overriding majority of how people make choices about what to believe.

> then you're not aligned with current scientific knowledge

Yes I am. You are not. You misrepresent the degree to which the things you've mentioned impact the things you claim they impact, and I'm beginning to think it's a malicious choice to do so.


  "one small part"
This is an inaccurate (or at best presumptively overconfident) portrayal of extant science.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4038932/

  "not the overriding majority"

That's another strawman. My claim is that it's a non-trivial cause, not that it's an exhaustive explanation of all the causes or that it explains an "overriding majority" of the variation.

Even if it explains just 25 percent of the variation, the assertion that people can change their political views despite that biological underpinning is akin to thinking that gay-conversion therapy can work.


I... think you need to read the abstract you just cited. It agrees with me, not you.

And I never said what you claim I said, so it cannot be a strawman. I simply (and accurately) said that you are overstating the degree to which biology impacts political affiliation. 25% is not even remotely accurate, and in fact laughably wrong. As your own cited paper explains:

> "The combined evidence suggests that political ideology constitutes a fundamental aspect of one’s genetically informed psychological disposition, but as Fisher proposed long ago, genetic influences on complex traits will be composed of thousands of markers of very small effects and it will require extremely large samples to have enough power in order to identify specific polymorphisms related to complex social traits.".

Additionally, specifically to your claim that what I was saying wasn't in line with current thinking in the field, again your own citation proves you wrong:

> "However, these findings have not been widely accepted or incorporated into the dominant paradigms that explain the etiology of political ideology."

You need to re-read the single study you cited. You clearly do not understand what they're claiming, and are pinning a large part of your worldview on something you don't understand.

Finally, the idea that people can change their political ideology has absolutely nothing to do with gay conversion therapy. This continued false assertion is tantamount to trolling. You know it's false, I know it's false, but you keep saying it because it's inflammatory.


  "You need to re-read the single study you cited."
No, you need to reread it, especially the introduction to the paper.

Their whole point is that social scientists need to incorporate a growing body of evidence developed outside of their field pertaining to the genetic and biological causes of political orientation.

"More than half a century of research ... has demonstrated that ... political attitudes, are influenced by genetic factors."

"We find that genetic factors account for a significant amount of the variance in individual differences"

"The results ... supports the call for a revision of the status quo"

  "25% is not even remotely accurate, and in fact laughably wrong."
Twin studies put the variation at 30-60 percent (not that I necessarily think it's that high), so it's not laughably wrong, and you are misrepresenting the science if you think you know for sure that it's less than 25 percent.


It's an interesting concept, but the requisite examination of the field has not taken place. This is the start, not the end, and you're treating it like there's a clear and actionable conclusion as a result of this single barely-cited paper.

These people do not represent the field at large, and to believe them without additional peer review and further study is to give up on scientific objectivity, and instead just favor whatever sounds appealing to you personally. That is what you've done here.

You are wrong because of your certainty that this work is Truth. You are not on the side of science here, you're firmly in the world of abject speculation, and you furthermore promote action on information that does not agree with you, even if all of it were true (which it is not).

You have a choice here; continue to spout lies and nonsense, and continue to misrepresent what you've read, or take a step back and try to re-assess why it is you were drawn to such an inconclusive ideology to the point of promoting action.

I suspect you'll choose the latter, but you have that choice now.


My claim is actually fairly mild, it's not "abject speculation" to say that there's nontrivial biological causes behind political belief. The introduction just summarized five decades of peer reviewed research claiming to show that it plays such a role.

That this evidence hasn't (yet) been subsumed into the theories of social scientists doesn't suddenly make it invalid for me to come to the rather mild conclusion that I've come to.

What is abject speculation is when you specifically assert as fact that 25 percent variation is high. This is a positive claim requiring its own burden of proof, and which contradicts some tentative evidence from twin studies.


The only positive claim I intended on making here is that you've put all your eggs into a highly speculative and niche area of psychology that has little institutional backing, and is far from well understood right now.

Anything else I've said was to counter your own positive claims. If I wasn't clear about that, I apologize.

And while now you may have backed into a more reasonable stance, we started this conversation with you comparing "improving your ability to change and grow" to "gay conversion therapy" which is laughably wrong.


The only positive claim I've made is that there's nontrivial biological causes, for which there's a substantial and growing body of evidence. I haven't moderated or changed my claim since we started this conversation.

You can see the evidence reflected in twin studies, in personality studies, in FMRI studies, and so on. I get that Social Science TM hasn't adopted it in their theories yet (which are always going to lag empirical reality), but I'm not exactly looking for this institutional mandate before I form my worldview, either. I've done academic research in the social sciences (albeit not in psych) myself and feel satisfied enough to come to the tentative conclusion I've come to given what I've read.

Am I 100 percent certain? No, but I'm certain enough (say, 95 percent) to operate as if it is true.

Regarding the gay conversion therapy thing - I was interpreting your comment to mean that people can change their political views/orientation after being given feedback etc. If that wasn't the intention/meaning of your comment, then I'm sorry for that.


You continue to obfuscate, misrepresent, and outright lie about what you said and the claims you've made. The history is right above us, it's clear to me and anyone who is reading this what you've really done.

This conversation is pointless, you're not willing to be honest or engage with legitimacy. It is a fact that people are capable of changing their political views, your own citations demonstrate that quite clearly, and your "95 percent" certainty that they cannot is a consequence of your own failure to understand what you've read.

I am done here, you are not a reasonable person.


You're misrepresenting what I said above.

My initial claim: "political belief is caused by personality traits (e.g. orderliness) and brain structure (e.g. amygdala size)".

Which is the same as all other representations I've made. I never said this was an exhaustive list of causes, which would obviously be a ridiculous claim to make. I'm not sure why you keep asserting that I'm changing my view, maybe you can point that out if it's true?

  ""95 percent" certainty that they cannot"
It's deeply ironic that you're accusing me of misrepresentation when you're throwing out these strawmen.

I never said that I'm 95 percent certain that people can't change their political views. That's a straw man. I said I'm 95 percent certain in my conclusion that there are nontrivial biological causes to political orientation and belief.

I think you're right that we should leave it here.


That sounds like a very empty platitude.

Can a libertarian grow and change into a committed fascist who does not give an iota about personal freedom and measures everything by the interests of the State?

If so, why didn't Mussolini convert everybody in Italy into committed fascists by peaceful means? It would have been easier than any suppression by force, not to mention less dangerous.


Mussolini's behavior I can't explain, but absolutely a person can change, and many often do, in dramatic and diametric ways.

I am having a hard time engaging with this line of thinking, it's so counter to some pretty basic systems the scientific community has long considered about as close to certain as that community can come to on those topics.

Are you seriously suggesting that behavior is chiefly determined by biology, and the entire concept of personal responsibility is not accurate?


You are the one who talks about behavior. I was talking about beliefs.

There is a difference between behavior, which can be modified, even though biology plays a huge role there (see heritability of alcoholism, even in adopted children who grew up in dry homes), and belief.

I can definitely pretend that I believe in a particular deity, or Critical Race Theory for that purpose, if I am intimidated enough by my peers to be afraid to tell the truth. That is behavior and it is modifiable at will. But I cannot really start believing it sincerely.

Beliefs, especially in people over 30, are pretty fixed. There are interesting exceptions, but most people who do a 180 in their political beliefs do so in early life.


The deaf community seems like it retains its protected class status, even when their members refuse cochlear implants.


That does fit very well with the American conservative viewpoint, yes.

What are the consequences of making that acceptable, I wonder?


> What conservatives really seem to want, meanwhile, is something more like a version of the “fairness doctrine” adapted for social media. (Ignore the fact that conservatives used to insist that the fairness doctrine was an unconstitutional left-wing conspiracy to destroy talk radio).

This is not a good faith argument, and it's made worse by the fact that (as others have mentioned) Tim Wu coined the concept of net neutrality. Free speech advocates and conservatives want social media to be a dumb pipe that doesn't discriminate on content, much like other internet services. Tim is uncharitably misrepresenting their position here.


A dumb pipe within which real world law on speech applies? Or not?

If I post defamatory stuff on my website I can be sued. If I do the same on your website, you can be sued unless you fall under that 230 regulation.

So this is the choice between "social media is liable for what you say and will therefore censor you" and between "you are liable and can therefore be sued".

The option "nobody can get sued" doesn't exist, because although there is free speech, there are limits to it (which get breached a few times a minute on big platforms). I am not saying whether that is a good thing or not; that is just the current state of affairs. And as the author stated: if you are basing what you say on the truth, you might be able to say a lot more (than if, e.g., you are spouting weaponized propaganda which is factually wrong).


Wait. Are you sincerely making the argument that conservatives would like the social networks to become public utilities? (following net neutrality)

We are living in weird times if a good faith argument really involves assuming conservatives want to make a class of businesses part of the public sector. Something seems off with your reasoning, but if you're right then I'm all about it!


Take the internet fiber cable that allows a person to connect as an example. Public utilities exist in some countries to provide that. In the US a private company will usually lay the cable.

In both cases, when someone signs up for a service, who they voted for doesn't affect their service.

In social media's case it does change the experience.

Asking for a neutral playing field is fair.

Facebook delivering that in a highly political environment where a lack of censorship scares one side is impossible.

We are heading from free speech into right speech, but we are going to end up with no speech as people tune out.


Utilities can be and are run for profit; they're regulated, though, and market standards are set.


Conservatives are much more interested in culture wars than economic issues.

By moving these companies from private to public sector, conservatives can use their gerrymandering and voter suppression power to control the companies, whereas under the more market capitalist approach, they follow closer to the demographic majority views


> Free speech advocates and conservatives want social media to be a dumb pipe that doesn't discriminate on content, much like other internet services.

Do that and soon every post will be about diet pills or AMAZING OFFER MAKE $$$$$ FROM YOUR COMPUTER CLICK HERE!!!!


We already have that with email and found solutions to it, and the same solutions could apply (readers choose their own spam filtering). The hard problems are where something might or might not be illegal and people strongly disagree. That's where 230 comes in, and it suggests to me that platforms under its protection shouldn't censor unless directed by law enforcement. The fact that they do is mysterious to me.


> readers choose their own spam filtering

That would surely simplify the problem - user chooses filter, then user is responsible. Then every social niche could have their own filters.

One thing I don't like about Google, FB and Twitter is that they don't tell us what content they have hidden from us. We should be able to know what was filtered out so we can re-rank with our own rules.

Of course that would not sit right with the web giants and politicians because they can't control anymore how our news get ranked and filtered.

If there's one outcome I wish to see from the new anti-trust push is to force them to open up the front-end and allow competing UIs and competing filters on their platforms. Why should a few people decide how information is accessed for the whole world by virtue of being the winners of a natural monopoly?
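The "competing filters" idea above can be sketched concretely. This is a purely hypothetical example (the `FeedItem` fields, `my_filter`, and `my_rank` are all invented for illustration), assuming a platform exposed its raw feed items plus a flag for anything it would itself have hidden:

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    author: str
    text: str
    engagement: int                 # the platform's own ranking signal
    platform_hidden: bool = False   # what the platform would have filtered out

# A user-chosen filter is just a predicate plus a scoring rule.
def my_filter(item: FeedItem) -> bool:
    # This user drops only obvious spam; nothing else is blocked.
    return "$$$" not in item.text

def my_rank(item: FeedItem) -> float:
    # Ignore the engagement signal entirely; rank by author trust instead.
    trusted = {"alice", "bob"}
    return 1.0 if item.author in trusted else 0.0

def render_feed(items: list[FeedItem]) -> list[FeedItem]:
    # Show everything the user's own rules allow, including items the
    # platform would have hidden, sorted by the user's own ranking.
    visible = [i for i in items if my_filter(i)]
    return sorted(visible, key=my_rank, reverse=True)

feed = [
    FeedItem("alice", "interesting article", engagement=10),
    FeedItem("spammer", "MAKE $$$ FAST", engagement=9000),
    FeedItem("carol", "controversial take", engagement=50, platform_hidden=True),
]
for item in render_feed(feed):
    flag = " [platform would have hidden this]" if item.platform_hidden else ""
    print(f"{item.author}: {item.text}{flag}")
```

The design point is that the platform stays a dumb pipe: ranking and filtering policy move to the client side, so each user, or each competing front-end, can swap in their own rules and still see what was removed and why.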


Behind the scenes email providers blackhole a huge amount of traffic that never touches even the spam inbox of their users. This is of course good, because it is what the users want. Social media companies likewise curate what content they allow to remain up because they want to please their users.


How do readers choose their own spam filters within services like Gmail et al? Honest question.


Spam filtering seems like a similar moderation issue?

People want a right to an audience, and spam filters limit that right


Not quite.

Spam filters are controlled by, and acting on behalf of, end users who would be receiving the spam. When people are unhappy about their spam filtering, they can adjust or turn off these filters, or migrate to another e-mail provider (keeping all the e-mails, and ability to communicate with the same people).

The issue with modern social media, they implement censorship no one asked for, in a completely opaque way, and don't even support user migration.


Nearly all useful communities make use of shared moderation to select content that is desired rather than merely filter out obvious spam.

It's not obvious that you could reproduce reddit or hacker news for example with a dumb pipe and user filters.

Maybe step one is proving that is even possible before insisting people do it.


When you have 2.8 billion people using a web app, that's way too many people to be a community, let alone a useful one.

Facebook or Twitter as a whole is not a community. You can build actual community on top of them if you want. I have no objections against moderators of these groups doing their moderation, if I don't like your moderation I can always leave your groups or unfriend you.

The problem is, Facebook and Twitter themselves are censoring content. Apparently, FB even censors private messages with political views they don't like.


Give me some examples of things that are being blocked that ought not to be? Lay off the generalities unless you feel up to defending the kind of material being banned.

I don't think Facebook ought to be required to provide a platform for people it doesn't want to. It is an absolute violation of their rights to force them to use their private property to promote beliefs they find abhorrent.

For example I don't think neo nazis, anti vaxxers, or election truthers need to have a more efficient way to spread their poisonous lies. I don't think Facebook ought to be limited to throwing up their hands and suggesting individual communities that don't want to hear about the next final solution simply don't attend to their hate.

The solution to undesirable speech isn't merely more speech when the undesirable speech is being used to plan the overthrow of democracy and the murder of their enemies, because eventually you won't have the privilege of speaking against them.


> Give me some examples of things that are being blocked that ought not to be?

Last URL I encountered was this, check the comments: https://avoiceformen.com/featured/my-son-doesnt-want-to-be-a...

> It is an absolute violation of their rights to force them to use their private property to promote beliefs they find abhorrent.

If FB is that intolerant to other people's opinions, they should do something else instead of being a social network 35% of global population uses at least every month.

My cell phone allows me to discuss anything using the property of the operator. My internet provider doesn't care what I do with their property as long as I don't break laws and pay bills. I don't see what makes social media so special that they're allowed to arbitrarily censor opinions on the internets in centralized manner. Especially in private messages. Especially after doing everything they possibly can to make sure there's no competition on the market.


Your link is full of hateful lies. It tells people that their transgender kid is really just a confused youngster who was somehow convinced by hucksters that he ought to whack his penis off for no reason which encourages parents to fix their kid by deprogramming them before its too late. This is exactly the attitude that leads to the massive suicide rate among transgendered teens. The world view it represents is basically a hallucination.

They are allowed to censor stuff on their networks because you are ultimately using their property and do so under terms set by owners of that property. This is completely trivial to understand. No law protects your ability to use that property as you please and until you get sufficient support from the general populace and the legislature none shall.

The dumb pipe that everyone is allowed to use as they please is the internet. This is more than sufficient. You don't need to have freedom to use facebook as you please in order to have reasonable freedom of expression. You can have your own website and express your opinion therein.


> Your link is full of hateful lies.

In my opinion, that link expresses the humble opinion of a middle-aged woman. I don't necessarily agree with her opinion, but I'm certain there's nothing hateful there. Also, I'm not certain but inclined to believe the OP is sincere, therefore whatever's written is not a lie.

Apparently you have a different opinion. That's fine. What is not fine is Facebook suppressing opinions they don't like.

I grew up in a communist country. You don't want a society where you are only allowed to express one opinion, the official one, and go to jail or a psychiatric hospital for expressing disagreement with that officially blessed opinion.

> you are ultimately using their property and do so under terms set by owners

Same arguments apply to phone networks.

> until you get sufficient support from the general populace

Given what FB/twitter have been doing lately, that support won't take too long to build.


It's hateful and it's child abuse. There's nothing humble about parents who hate their LGBT+ children.


I disagree with your interpretation. Parents hating their kids don't care what's happening to them, the OP obviously cares, otherwise she wouldn't be writing that article. As for LGBT+, that's unlikely as well, here's a quote: "he isn’t gay: in fact, he has a girlfriend".

I'm not a child psychologist, I don't know these people, and I don't have a strong opinion on that particular topic.

But I do have a strong opinion on the following. I don't want Facebook, or anyone at all for that matter, to globally police content on the internets. With great power comes great responsibility. Social media have been abusing their power for years, yet they bear zero responsibility, lacking transparency, and implemented no ways to appeal.


If you don't understand the language it's easy to see how you miss it, but stuff like this is pure hate. It's based on lies. It's pushing a narrative ("my kid isn't really trans, it's just a fad pushed on him by the trans agenda") when the reality is that kid is too scared to come out to their parents; trans healthcare is a fucking joke; and society hates trans people and spends considerable time and money making their lives harder.

> adults on the internet were grooming vulnerable kids

This combines a decades old trope (LGBT+ people are predatory child abusers) with a more modern attempt to deny access to trans healthcare (trans kids are mentally ill and lack capacity to make decisions). The article develops that mentally ill point here:

> A lot of the young men calling themselves transgender have autism, ADHD, OCD or Asperger syndrome. Parents have sometimes known about these conditions for many years. But gender clinics aren’t interested in pursuing therapies which might actually help these kids understand why they feel the way they do. It’s the only field of medicine where you’re not allowed to talk about comorbidities or other treatments. This is the medical scandal of the century.

...and it combines it with a lie about treatment. Care for transgender children very much spends time with psychological assessment to question whether the child actually is trans.

She goes on to lie about affirmation:

> there’s nothing but affirmation

Here she's making the claim that affirmation is a pathway to puberty blockers, then cross sex hormones and surgery. That's untrue. An important part of healthcare is to explore fully the child's ideas around gender, and the way you do that is with affirmation -- they then trust the healthcare team and open up about what they think and why they think it.

> I have spent all of the lockdown researching transgenderism

Here's a handy hint: anyone using the term transgenderism is an anti-trans bigot.

All the anti-trans people posting links to HN can never post something from WHO or CDC or WPATH or American Academy of Pediatrics -- it's always some blog post that's unsourced to anything credible and full of disinfo.


There is a difference between a communist autocracy, where the parties with political and military power get to define what the right opinion is, and a free market of ideas, where people are allowed to express all sorts of ideas, including hateful ones, but where others are free to disassociate themselves from them and not promote their ideas.

The difference with phone and internet systems is that they represent a limited number of points of contact between yourself and the entire rest of the world; it's important that they remain impartial, in the same way that we don't want your local and only power company deciding whether your shop can exist.

The broader internet is inherently diverse enough to allow virtually everyone who isn't actually breaking the laws they live under to exist on it. We don't need Facebook as a common carrier to achieve this.

Let's contextualize this, shall we? Conservatives of the older, average bent, those talking about fiscal responsibility, the importance of gun rights, and support of actual family values, have zero problem operating in the prior and evolving social media context.

New school conservatives that flirt with hate and violence about as openly as David Duke or who promote actively harmful and hateful content are seeing their ability to do so curtailed after sharing their hate resulted in a number of people being murdered/harmed.

The thing is that this conservative demographic, much as it has become more visible and more energetic, is shrinking, not growing. In 20 years it will be limping. In 40 it will be virtually extinct.


I don't want deplorables to be able to spread poison easier. A dumb pipe with user side filtering wouldn't allow major platforms to deny neo nazis, anti vaxxers, and cults a platform.


Not really a problem. All that means is the platform gets designed differently - such as actually giving users control of what accounts/content they are shown.


>Stated differently, some liberals seem to have the fantasy that potential civil liability would finally force platforms to do more about disinformation on their sites — “to take responsibility.” But what does that mean? Because whatever the moral responsibility may be, there isn’t actually any legal repercussions for republishing, or publishing, crazy propaganda and conspiracy theories. If so, Newsmax and Gateway Pundit and even Fox News would not exist.

He's completely wrong here. No one really cares that Breitbart publishes lies to its readers on its own site. What people are upset about is that facebook, twitter, reddit, etc. allow propaganda and lies to be published on their sites for their users.

If The_donald wants to make a site and push bullshit, go ahead, people that only want to see The_donald can read it, but coordinating so that hundreds of millions of redditors see bullshit and spread bullshit is ridiculous.


>If The_donald wants to make a site and push bullshit, go ahead, people that only want to see The_donald can read it, but coordinating so that hundreds of millions of redditors see bullshit and spread bullshit is ridiculous.

How would that work? If reddit loses its 230 protection, then this The_donald-only reddit alternative would also lose its 230 protection.

Additionally, I'm concerned whether reddit could exist at all. Would every single comment need to be human-moderated before it's posted? That doesn't seem feasible, and seems likely to kill reddit.


The threat of revoking someone's 230 protection would ideally make them think about everything you just said, and hopefully decide to abide by the conditions of that protection. That is, they get to choose for themselves: either support free speech on their platform and receive the benefits of that protection, or don't. If they don't, then they'll need to figure out how to address the kinds of problems you mentioned.


>The threat of revoking someone's 230 protection would ideally make them think about everything you just said

By someone you mean the site owner? I don't know what you mean by "support free speech". Does it mean every non-illegal post must be allowed? There will probably be a ton of spam. Possibly porn in every subreddit. The entire idea of subreddits will die, because moderators of individual subreddits will no longer be able to moderate. There can no longer be rules about what is on-topic vs off-topic for a subreddit.

>If they don't, then they'll need to figure out how to address the kinds of problems you mentioned.

I don't think they can be addressed.


Why should we force twitter to carry your speech instead of asking you to get your own site?


Okay, but what does any of that have to do with Section 230? How would repealing it have any effect on how Facebook treats content on their site vs. how Breitbart does it?


Here's an interesting question: would linking to a Breitbart article from Facebook be risky for Facebook if 230 was repealed? That is, we can probably agree that summarizing a Breitbart article into a Facebook post would be risky for Facebook, but a simple link might be fine.



