Justice Department to propose limiting internet firms’ Section 230 protections (wsj.com)
132 points by _sfvd on June 17, 2020 | 189 comments


I'm generally sympathetic to the idea that Section 230 protections should come with some sort of obligation to allow free speech.

However, the actual policy proposals for replacing Section 230 are all outright dystopian. Josh Hawley, in particular, is NOT a free speech advocate. His problem with Facebook/Twitter is perceived liberal bias, and the alternatives to Section 230 that he suggests are 100% about wresting editorial oversight away from one class (tech CEOs) and giving it to another (a politically-appointed board).

Does anyone have a good proposal for how to go about reforming Section 230 in a way that's workable and values free speech?


>Section 230 protections should come with some sort of obligation to allow free speech. [...] Does anyone have a good proposal [...] and values free speech?

Nobody has a good proposal because every discussion about the ideal of "values free speech" hides the true difficulty: nobody wants to be forced to pay for others' undesirable speech.

E.g. Youtube can't be a "free speech" platform because advertisers have free will and can choose to not pay for it. (Previous comment about Adpocalypse: https://news.ycombinator.com/item?id=23259087)

Always mentally translate "create a website that allows free speech" into "create a website that forces others to always pay for undesirable speech they don't agree with" -- and you will see that's a virtually impossible dream to accomplish. There is no broadcasting medium (including websites) in any country that doesn't have interference and pressure to remove/ban content via consumer boycotts, advertisers, subscribers, business judgement, or government officials.

Websites face the hard reality of requiring cpu/disk/bandwidth, all of which cost money, and that cost is the lever others use to keep "absolute free speech" from getting realistically implemented.


This is close, but misses the mark slightly I think. The cpu/disk/bandwidth to store and serve text are so small as to be irrelevant. I don't think it's a cost issue.

The issue is one of association. There are strong social forces that punish association with any distasteful speech. The association taints everything (and everyone) it touches, and the liability in the form of negative blowback can grow far beyond whatever costs were involved in actually serving the content.

Even if some set of individuals were willing to donate all the hosting costs of the distasteful speech, there would be strong social pressure for hosting platforms not to accept the money.


>I don't think it's a cost issue. The issue is one of association. There are strong social forces that punish association with any distasteful speech.

Yes, association is also an issue so there are at least 2 forces happening: cost and/or association.

Since I used the word "undesirable" and you used the word "distasteful", I believe we're thinking of 2 different scenarios:

(1) inconvenient/controversial content like politics or alternative COVID theories

(2) vile or obscene content like beheadings or adult porn

My perception is that the censorship topics getting press are more often (1) than (2), and pressure on the money trail is the primary weapon for (1).

E.g. supporting the Hong Kong protests is not category (2) vile/obscene (maybe your "distasteful"?), but nevertheless, Apple removed podcasts with that subject matter from the App Store to appease China[1].

Even with Apple's billions in its war chest, Tim Cook did not say "China can fuck off -- we're keeping the podcasts because our App Store is all about free speech!". That didn't happen because Apple wants to sell smartphones in China and so they will cooperate with China's limits on "free speech".

[1] https://www.theguardian.com/technology/2020/jun/12/apple-rem...


>The cpu/disk/bandwidth to store and serve text are so small as to be irrelevant.

Just wondering, in your view, if these costs are so "small", who pays them when advertisers abandon the website? Where does the money come from to cover these costs? (Small as they are.)

Full Disclosure: My own belief is that IRL these costs, especially for something at the scale of YouTube, are not likely to be terribly "small" at all. I seriously doubt most organizations could countenance such costs with no return on that investment.


The revenue from advertisers is supporting both controversial and non-controversial content.

If advertisers completely pull their ads off a website even though only 1% (say) of the content they were sponsoring is actually controversial, then the blowback has cost the platform 100x more than what they were paying to host the controversial content.

I think hosting costs can be significant overall, yes. But I think the marginal hosting cost of allowing controversial content is not significant.

It's only a significant cost when it impacts the revenue stream for the bulk of what the website is publishing.


But if there is no way to remove content because of mandatory free speech, then the controversial content goes to 99%. No advertiser will pay for ads alongside a torrent of profanity and porn. It just won't happen. (Well, porn sites might? But no one else.)

Not to mention the fact that the sites could not stop advertisers from posting ads on their site in any case. (Since it would be illegal to remove content. Free speech and all that.) So why would I pay that 8 figure yearly sum to you that the big advertisers are paying today, when I can pay not even a million to a spam farm to post my ads as standard comments that you are forbidden from removing? And it's completely legal.

I just think you're being a tad idealist. Spam farms exist. Botnets exist. Pedophiles, porn stars, klansmen, all these exist. This stuff would be the majority of content, not 1% of content. Spam alone would overwhelm interesting content, and that's before you even throw in the porn, pedophilia, and klan rallies.


Is it really the case that advertisers won't pay for it though?

I get that many advertisers won't, but even companies who do want to advertise on controversial content don't really get the choice to do so, since platforms seem more prone to flat out removing/banning said content rather than putting it behind a 'controversial' flag and letting advertisers opt in/out of advertising on it.

These sites already have systems to mark what kind of content something is, and advertisers can already choose to market on content in some categories and not others. So it'd seem like if there are companies willing to pay for such speech, they should be allowed to.


Yeah, I used to think of it like this.

In particular, my view was basically: "don't like FB/YT/Twitter policies? Spin up a wordpress; it's as easy as posting to FB/YT and you can pay the monthly bill for a quite decent audience using loose change."

I've come around in the past couple of years. Social networks/content platforms are... well, networks and platforms. Like it or not, the policies of the largest networks/platforms will have a non-trivial impact on public opinion. They have become a (perhaps the) public square. And the government doesn't have to extend Section 230 protections to those networks/platforms.

I really like the idea that individual users should be able to create their own content filters and buy/sell content filters. At least in the abstract, this seems like it would address the need for content moderation without centralizing the censorship.


> Always mentally translate "create a website that allows free speech" into "create a website that forces others to always pay for undesirable speech they don't agree with" -- and you will see that's a virtually impossible dream to accomplish. There is no broadcasting medium (including websites) in any country that doesn't have interference and pressure to remove/ban content via consumer boycotts, advertisers, subscribers, business judgement, or government officials.

> Websites have the hard reality of requiring cpu/disk/bandwidth and they all cost money and that's the lever used by others that keeps "absolute free speech" from getting realistically implemented.

There seems to be a blind spot here in the idea that "websites" have to be big monolithic platforms that give everyone a megaphone.

"Websites" where you can say whatever you want are and have been cheap, and there have been famous examples of this for decades (Timecube!).

But expecting to get access to someone else's megaphone is a very different question. Recently it's been mediated by "engagement" which is a socially terrible base metric, editorially - it encourages the most ridiculous, provocative thing. But this is still a choice, not just some technological inevitability or "correct" ideal state. Big platforms will always necessarily do some sort of curation.

Putting the government in charge of that curation seems silly, since the real cost of bypassing the platforms is so low. Yeah, you have to earn the eyeballs then, instead of piggybacking on other people's shit, but is that so bad?

It's like saying "people shouldn't make independent movies anymore, we're just gonna have the government review all the scripts the big studios take on and make them take some they normally wouldn't."


That's not true. There are people who would be happy to advertise on The Federalist or conservative content, but YouTube/Google bans it anyway because they are ideologues.

How hard would it be to allow them to match with advertisers who specifically want to be on that type of content?


Make it all dumb pipes and make users responsible for regulating what they see/hear. Make a market for filtering content, one great filter across a platform is not flexible.

Require platforms over a certain size to provide real-time data accessibility across platforms. Facebook and Twitter are monopolies by virtue of market position; anyone can build a platform that is functionally the same. Create competition here.
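
As a rough sketch of what user-chosen filters on top of a dumb pipe might look like (all names here are hypothetical, just to make the shape of the idea concrete):

    from typing import Protocol

    class ContentFilter(Protocol):
        """Interface any third-party filter could implement and sell."""
        def allow(self, post: dict) -> bool: ...

    class KeywordFilter:
        """One trivial filter a user might install from a filter market."""
        def __init__(self, blocked: set[str]):
            self.blocked = blocked

        def allow(self, post: dict) -> bool:
            text = post.get("text", "").lower()
            return not any(word in text for word in self.blocked)

    def render_feed(raw_feed: list[dict], filters: list[ContentFilter]) -> list[dict]:
        # The platform stays a dumb pipe: it serves the raw feed, and the
        # user's own stack of filters decides what actually gets displayed.
        return [post for post in raw_feed if all(f.allow(post) for f in filters)]

The platform's only job is carrying the bytes; the filter market competes on everything else, so no single moderation policy gets imposed on everyone.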


This sounds like an excellent idea, but I'm worried about architecture/execution/implementation. Generally successful regulations set goals but don't prescribe how those goals should be met. There doesn't seem to be a super clean separation between "what" and "how" here.


Implementations exist: Microsoft provides parental control filters, OpenDNS has community-driven content classification.


Right. Building it is definitely not an Unsolved Problem, just like building a healthcare exchange website is not an Unsolved Problem. The problem is organizational and political, not technical.


> Make it all dumb pipes and make users responsible for regulating what they see/hear.

Just like the good old days, until somebody "thought of the children".


If you're looking for an alternative take, check out some of Cory Doctorow's writing on this. His position is that forcing platform neutrality is less important when platforms don't have a monopoly over communication.

Different people have come up with different plans about how you could address tech monopolies, with varying degrees of extremity:

- Splitting up companies that control entire vertical slices of a market. Warren in particular was campaigning pretty hard on this, especially in regards to Amazon/Apple app stores.

- Forcing companies to allow data exports by consumers, and specifically to allow automated data exports. For example, Facebook would need to allow you to access an API to pull your data, so you could plug that API into a competitor instead of manually downloading everything (see the sketch after this list).

- Weakening Computer Fraud and Abuse laws around site scraping and adversarial interoperability.

- Adding additional exceptions to the DMCA around interoperability. For example, allowing companies to break Kindle DRM for the purpose of moving books to a competing service if Amazon didn't provide a way for them to migrate books on its own.

- Forcing certain data formats to be standardized, or requiring standardized API layers on top of services.
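
To make the automated-export bullet concrete, here's a minimal sketch of a client against a hypothetical standardized export endpoint (the URL, path, and schema are assumptions for illustration, not any real Facebook API):

    import json
    import urllib.request

    def export_user_data(base_url: str, user_token: str) -> dict:
        """Pull a user's data from a (hypothetical) standardized export API,
        so a competing service could import it without manual downloads."""
        req = urllib.request.Request(
            f"{base_url}/v1/export",
            headers={"Authorization": f"Bearer {user_token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # A competing service would consume the same standardized schema, e.g.:
    # data = export_user_data("https://api.incumbent.example", token)
    # new_service.import_posts(data["posts"])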

There's a lot of debate in those areas about how far is too far, and what counts as a natural monopoly, and what negative side effects might exist for particular strategies. But, the thread running through all of them is that Section 230 is fine, awesome even. There's no need to get rid of it, 99% of the time we want moderation on most of our platforms.

Platform censorship is really only a problem when consumers don't have the ability to easily switch platforms/hosts, and in that case we should break the monopolies, not the Right to Filter[0]. You see people complain about censorship on Twitter, you don't see as many people complain about censorship on Mastodon, because on Mastodon you can set up your own server if you really need to. One of the biggest points of federated services is to allow communities to choose how aggressive they want to be about moderation.

[0]: https://anewdigitalmanifesto.com/#right-to-filter


Thanks for curating all of these proposals!

- splitting up: seems like a temporary fix at best (see ma bell).

- data exports: exporting is nice, but... then what? the network is still a network.

- weakening CFAA/DMCA and allowing scraping/interop: It'd be a terrible hacky world, but I could imagine it working. Probably would end up looking like a weird inverted version of the Wuph! bit from The Office. https://en.wikipedia.org/wiki/WUPHF.com So not a good solution but maybe actually a solution. Plus we should do this anyways.

- standardized API: I like this combined with the sibling proposal of allowing people to build their own filters. I think that's my new position unless someone can convince me otherwise :)


I don't have a good proposal really, but I agree that "politically appointed board" is exactly the worst thing we could have. That's the point where true free speech advocates will have suffered total defeat.


"Sure I believe in free speech, but you can't let that guy say those things."


Not necessarily; a government board should only do law enforcement. You should at least get your day in court.


Define free speech in a way that allows a platform to ban offensive content, while requiring them to publish all content.

Also this is contextually a clear retaliation for speech that the government does not like, and their arguments are pretextual.

But also, if a platform loses 230 protection when it restricts political opinions, then sites would need to leave racist and homophobic comments up, personal attacks against the authors, etc., because if they lose 230 protection they become directly liable for content on their site if they filter any of it.

That was the whole point of Section 230 - sites have a legitimate reason to want to stop arbitrary content being hosted by them, but they only had the "I'm just a dumb pipe" defense as long as they left everything up. Preventing that is literally the reason Section 230 exists.

But here we have a president who doesn't like one platform's content moderation policies, and has decided to rewrite the law in order to make that moderation illegal.

It is clearly retaliatory, and it is clearly with the intent of restricting the speech of those entities.


> some sort of obligation to allow free speech

Isn't that the opposite of what the text of the law says? Doesn't it provide protection for moderating content that "one may find objectionable", which could be basically anything?


The text of the law is a mess: it gives content providers the ability to define whatever moderation policy they want, but also gives objectors the right to sue for punitive damages in the event that they disagree with how it is applied, with their claims being judged by whether they uphold undefined "fair dealing standards".

It's not so much an attempt to defend free speech as to bury the affected companies in litigation if their moderation policies aren't either nonexistent or up front and aggressive: exactly the situation Section 230 was written to avoid. It's just in this case the lawsuits will come from the parties seeking to cause offence rather than the offended.


I can see how it could be read that way, but are content providers really buried in litigation in practice? I actually don't have much insight here personally, so I'm quite curious.


No, because they have Section 230 protections in the first place. They can arbitrarily start playing Mao, banning people for arbitrary undisclosed reasons, and it would be legal. That is a major Chesterton's fence.


To me, the question is whether the web is something people use through a middleman, i.e., someone else's website like Mark Zuckerberg's, versus a thing that we use directly, i.e., having our own websites. If we follow the latter thinking, then of course we are personally responsible for what content we place on the website.

In either case, the web in its design is still a "public place" where "free speech" can occur, where anyone who is connected to the internet has the potential (setting aside issues of state censorship) to communicate, via a public website, anything to anyone, anywhere in the world.

Section 230 was reputed to be passed in response to a lawsuit against Prodigy, an online subscription service, which technically, IMO, was not the same as the emerging "web". IMO, services like Prodigy, Compuserve, America Online, etc. were walled gardens that could exist outside of the web. Rightly or wrongly, I always viewed Section 230 as protecting ISP's from litigation arising out of the content people included on their websites, not as protecting websites from litigation arising out of the publication of the content. It is up to the website owner to remove offending content, not the ISP to block access to it. This makes practical sense. We wanted ISPs to stay in business.

As crazy as it may seem to consider messing with Section 230, there is certainly an argument that the protection it affords has been usurped in ways never anticipated, by enormous "communal" websites larger than anyone could have imagined. When someone's website has billions of pages, comprising submissions from the general public, it becomes impractical to remove offending content. I doubt Section 230 was intended to address this problem, to keep a small number of individual websites in business and ensure the creation of a small number of advertising services billionaires.


> It is up to the website owner to remove offending content, not the ISP to block access to it. This makes practical sense. We wanted ISPs to stay in businesss.

Yeah, it's interesting. That's how everyone thought about it back in the day. Section 230 was about ISPs, not phpBBs.

I remember actually worrying about this when setting up my own forums. Even talked to a lawyer and included CYA language in the EULA. I would also use this as a justification for bans ("I'm possibly on the hook for his bullshit, which might be called harassment by some local pd, because the law is new and who the hell knows what will happen").

Thanks for the reminder. That's really interesting, and it's also really interesting that I didn't even remember this.


The whole point of Section 230 is to allow digital communications services to moderate their platforms without incurring liability for the things their users say. If you want to stop the moderation, all you would need to do is completely repeal Section 230- as it no longer serves any purpose under such a system.


Without Section 230, if I host an unmoderated social network, could I face liability if one person libeled another using my website? My understanding was yes, but your comment suggests otherwise.


If you don't know about it, then no, according to https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc. (This is one of the cases that spurred Section 230; the analysis is, I believe, still valid today for sites that don't moderate things, because the point of Section 230 was to provide equivalent protection for sites that do moderate things. But it's definitely valid if Section 230 goes away.)


I think it's the opposite. If you don't moderate you are not responsible. If you moderate, you undertake liability.


This is not really true; in fact, in my opinion, it is the opposite of this. Services are given immunity if they don't moderate their content. Once they moderate it, they lose the protections. Facebook wants to moderate content and receive immunity, and that is the crux of the problem.


You're mixing up section 230 with the situation prior to section 230. There were two important cases prior to its passage:

- Stratton Oakmont, Inc. v. Prodigy Services Co., in which Prodigy was found to be liable due to their content moderation, and

- Cubby, Inc. v. CompuServe Inc., in which CompuServe was held not liable for content, as they were unaware of it

Section 230 was in fact created to change this - to allow companies to moderate without making them liable for all of the actions of their users.


I like HN’s approach and wish more platforms would follow a similar format. As far as I know, nothing is ever “removed” from the site - it’s just greyed out or hidden by default. Anyone who wants to read the bothersome comments can flip the switch to see them but no one can reply to them which seems like a really effective approach to me.

If a “censored” tweet couldn’t be shared/retweeted/replied to but was still available for anyone who wanted to seek it out then the idea (however distasteful) hasn’t been censored strictly speaking but it also hasn’t been amplified. I’d prefer a compromise that leaves control over acceptable content in the hands of the platform owner or the users rather than the government.


In this comment thread you can find at least one comment removed. Search for the text "[flagged]".


If you have showdead on you can still read the flagged comments, without even needing to click through like a Twitter content warning. I'm sure mods can permanently delete stuff where absolutely necessary, but generally seem to leave it even when it's obvious spam or highly offensive. I guess the big difference between this and social platforms is that nobody comes to HN to show off their edgy credentials to their adoring fan base or organize harassment campaigns via the platform.


I see you chose to attack the person, not the proposal.

You are wrong. The bill does not designate a political board; it requires tech companies that have more than 30 million U.S. users per month and an annual income of over $1.5 billion to publish all of their content moderation policies. Users who charge that the companies are not implementing content moderation policies fairly would be able to sue for $5,000 plus attorney fees.

I think it's reasonable for these social media behemoths to post their mod logs.

I'd even like to see sites like HN do it. Lobsters does: https://lobste.rs/moderations

If you have a specific gripe with this, let's discuss the legal text.

I really don't see how GP is currently top comment.

Forcing giant social media companies to publish their content moderation is transferring power from the tech ELITE to the public. No political committee is in charge, companies will be forced to publish their logs, and the courts can be used when users think companies are still acting in bad faith and not properly publishing their moderation logs.

PSA: READ THE BILL, IT'S SIX PAGES!!!

https://www.hawley.senate.gov/sites/default/files/2020-06/Li...


The published text doesn't require companies to publish logs; it requires them to publish a policy [something Twitter and Facebook already do to some extent] and then allows vexatious litigants to sue for $5k in imaginary damages if they disagree with how the policy is applied.

This isn't transferring power from the tech elite to the public, it's making trolling the new patent trolling.


Under that interpretation, companies will certainly want to keep a public moderation log for court documentation then.

Being as transparent as possible will be best for the user and the courts to determine whether or not there is selective bias in moderation.


Section 3.

Here's Josh's own description of the legislative intent: "Big tech companies would have to prove to the FTC by clear and convincing evidence that their algorithms and content-removal practices are politically neutral. The FTC could not certify big tech companies for immunity except by a supermajority vote"

If the FTC has the authority that Josh wants it to have, then it will 100% be politically weaponized by whoever controls the White House at the time of passage (so, Trump, because it'll only pass if R's sweep in 2021). IMO it's quite naive to think otherwise. In general, but also specifically with respect to Trump.

But, assume Trump is this amazingly neutral and high-minded person uninterested in using political power to shape social media narratives. Okay. I have a PhD in machine learning, have tons of experience designing and deploying systems, and I'm pretty up to date on all of the fairness literature. I have No. Fucking. Clue. how I would convince even myself that a content moderation algorithm is "politically neutral".

Even with a clean spec, this seems hard because content moderation algorithms are huge and complex. Wasn't it just a few years ago that a bug in Java's sorting algorithm was found by trying to certify its correctness? Like, bugs live in freaking sorting algorithms of the most popular languages for years and years. Even with a ridiculously clean spec, ...

...and the spec here isn't nearly as clean as "sort the list". The question of what "politically neutral" even means is extraordinarily political. So even if the FTC weren't explicitly weaponized -- and, dear god, it will be, because the counterfactual here is insane -- the judgements here will still be implicitly political because the spec ("politically neutral") is inherently political.

Proving that a hugely complex ML algorithm is fair simply won't be some sort of apolitical mathematical exercise.

Also, note well: I wasn't even referring to the Ending Support for Internet Censorship Act specifically. But I don't really want to start a debate around this point because that's all beside the point.

I'm curious. What do you think of the proposal made in this thread that people should be able to use their own filters and the big tech cos should be required to implement a clean api for enabling third party filters?

That seems like it solves the "some people don't want to see X" problem in a pretty politically neutral way, but also in a way that acknowledges the difference between the walled-garden network effects web of 2020 and the more decentralized web of 2005.

Seems strictly superior to Josh's proposal of creating a huge incentive to politically weaponize the FTC and giving that almost certainly weaponized body broad authority.

If you think that the "user-chosen filters" solution is not better than Josh's proposal, I'm really interested to hear why.


I'm not sure I agree with that framing of the relationship between Section 230 and free speech.

For reference, here's the law:

> (1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

> (2) No provider or user of an interactive computer service shall be held liable on account of— (A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Section 230 was written to solve a very specific problem: Prodigy tried to moderate content on their site, and when someone posted libelous content and they didn't remove it, Prodigy was held legally responsible. CompuServe did not moderate content, and when someone posted libelous content, CompuServe was not held legally responsible. There was a perception that this was a counterintuitive result, and so Section 230 patched over it.

This has nothing to do with the ideological content of the communications. The messages in both cases were already unlawful because they were libelous - the question is whether CompuServe and Prodigy bore any liability (i.e., any obligation to not republish it), or just the end user.

Also, as written, Section 230 does not create an obligation to do anything. You don't have to moderate obscene, lewd, etc. content. You can choose not to moderate anything. The law simply says, 1, you the website operator aren't responsible for what people post, and 2, you don't gain any additional liability if you choose to moderate these things. It doesn't create any liability for not moderating them. The perception (which seems to have been empirically correct) is that Prodigy's approach would be more popular in the market than CompuServe's, and so the law should not create a legal incentive to act like CompuServe. The new law simply removed that incentive; it did not create a legal incentive to act like Prodigy.

The results of the two cases are only counterintuitive if you believe it is good for society for service providers to proactively moderate speech that is already illegal and err on the side of over-moderating. I don't think that belief is easy to reconcile with a strong pro-free-speech view - you're trusting a platform to be making decisions that would otherwise be made by courts, and you don't have nearly the representation/recourse/etc. you do with the legal system, if they decide to moderate you.

In particular, adding an obligation to protect free speech means that providers can only moderate content if they're confident it would result in legal liability. If they're not sure (suppose that, to pick a recent example, someone says that J. K. Rowling "cannot be trusted around children" - is this libelous, or a constitutionally-protected opinion?), they should err on the side of not moderating. But that matches the status quo ante Section 230. If you think that forums should err on the side of under-moderating, then it was perfectly fine to be in the legal situation where Prodigy's approach was riskier than CompuServe's.

Note also that neither of these scenarios does anything to discourage people from running forums where they tightly control what is said (ideologically or otherwise). If I want to host a personal blog with only my own posts, I can do that today, I could do that before Section 230, and I can do that essentially regardless of anyone's proposals (because I have a First Amendment right to say what I want and only what I want). If I want to invite my friends and only my friends to comment, I can do that too. If I want to invite the entire world to comment and I screen comments before posting, I can do that too (I also have a First Amendment right to free association). I'm still liable for unlawful posts (from libel to copyright infringement to whatever else), but if I'm willing to tightly moderate content, that's okay.

Another pro-free-speech opinion here, by the way, is that the real problem is with libel laws, and neither CompuServe nor Prodigy should have been held liable because the speech shouldn't have been illegal in the first place. This is entirely orthogonal to the "free speech" concern of perceived ideological bias.

It's only in the weird intersection of all of these things that the framing of Section 230 and ideological bias seems to make sense - you'd have to take the anti-free-speech view that ruinous penalties for libel are good, and then carve out an anti-free-speech exception that says that if you choose not to exercise your right to say what you want or associate with who you want, libel laws don't apply to you. And then, somehow, the two anti-free-speech approaches cancel out and turn into a free speech view - platforms are obligated to be non-ideologically-biased (in a sense defined by the government) for fear of arbitrary civil penalties.

(By the way, any free-speech reform to Section 230 really should start with repealing 230(e)(5), where FOSTA/SESTA partially removed Section 230's protections so that platforms became responsible for messages posted by users about "the promotion or facilitation of prostitution.")


Yeah, I agree with a lot of that. Especially the point that creating an obligation to protect free speech is, even if a nice value, pragmatically totally impossible to implement. That's sort of what I was trying to say in my original post (poorly I guess): "sure, maybe free speech would be nice, but honestly how? all the medicine seems worse than the disease, and this is a fundamentally thorny problem"

What do you think of the proposal that, to keep section 230, websites with a large audience must implement a standardized api and then folks would be allowed to create their own content filters on top of that api?

Thanks for your post.


It's not solely that it's pragmatically impossible - it's that IMO it's not a free speech value. The government should not be in the business of telling private entities, regardless of size, that they are obligated to republish speech they don't agree with. Doing so may be a very important social/cultural norm but it shouldn't be a legal one.

(I think you can carve out a reasonable exception for the fairness doctrine and the equal-time rule for broadcast radio/TV based on the fact that spectrum is limited, and even imperfect attempts by the government at ensuring fair allocation of spectrum are better than none. But the internet is not limited in the same way; anyone can start a discussion forum without a resource allocation from the government. For the same reason, newspapers and magazines don't have anything like the fairness doctrine and never did - for about as long as there have been newspapers, anyone could start a new, competing newspaper, so there was little need to make a rule that everyone had the right to get their articles published in the local newspaper.)

Re a standardized API for keeping Section 230 - I still maintain that ideological neutrality is completely unrelated to Section 230, which is about directing the liability for speech that's already illegal.

Proposals to make Section 230 related to ideological neutrality are about weaponizing the threat of people making illegal speech to coerce websites to do things. I think that's a lot worse as a matter of policy than directly telling the websites what to do, if that's your actual goal.

Here's a thought experiment: suppose you have a group of 1000 honorable people who would never post libel/threats/copyright infringement/whatever. If I run a web forum that's restricted to these people, nothing about Section 230 can impact me, because they're never going to do anything that will incur legal liability for themselves or me. If 500 of those people are pro-abortion-rights and 500 are anti-abortion-rights and I restrict the forum to one of those subsets, that doesn't change the analysis - I'm still not going to be affected.

The only way Section 230 becomes relevant is if a couple of those people are dishonorable (and also boneheaded) and want to post illegal speech. Then they incur liability for themselves, of course, but if I lose Section 230 protections and I fail to moderate their speech, I also incur liability.

But the ability of those people to post illegal speech on my forum is clearly not a public policy goal - their speech is already illegal. Sure, there will always be a few such people in the world, but the law has, until now, taken the opinion that people shouldn't do that. Adding a new law that relies on people continuing this illegal behavior for it to have the right incentive seems like a poor plan: it is a complicated weapon and likely to work poorly in practice too.

If you want to make a rule that large websites cannot operate at all unless they are content-neutral in some definition, do that instead of merely making them subject to legal risk. But then you have to figure out exactly how setting up those rules is compatible with the right of private entities to engage in free speech and association. (And I think having to figure that out is a good thing.)


Just change the wording to "illegal speech" instead of a vague definition like "indecent speech".


Use the First Amendment standard, which is basically anything but obscenity and threats of imminent violence.


That's unreasonable. Without moderation you'd have a 100 to 1 ratio of spam to good content. Platforms should be able to control content in the way they see fit for their platform.


Banning spam might be possible without giving platforms the power to make their own judgements about the truthfulness or decency of the content they host.

If 90% of (a random subset of) users agree that a given piece of content is spam, the platform should be entitled to delete the content. The company would then be allowed to ban a user after a certain number of strikes, possibly subject to an appeals process where a human employee checks that this isn't a case of a minority viewpoint being unfairly silenced by false reports.

This would democratise these platforms, and only give companies the discretion to allow more content than their users are interested in, rather than less.
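
A minimal sketch of that rule, assuming a 90% threshold over a fixed-size random sample (all numbers and names are illustrative, not a worked-out design):

    import random

    SPAM_THRESHOLD = 0.90  # fraction of the sampled users who must agree
    SAMPLE_SIZE = 100      # random subset, to blunt brigading by self-selected reporters

    def is_spam(votes: dict[str, bool]) -> bool:
        """votes maps user_id -> whether that user flagged the content as spam,
        over the pool of users who were shown it."""
        if len(votes) < SAMPLE_SIZE:
            return False  # not enough signal yet; leave the content up
        sample = random.sample(list(votes.values()), SAMPLE_SIZE)
        return sum(sample) / SAMPLE_SIZE >= SPAM_THRESHOLD

The strikes and human-reviewed appeals process would sit on top of this: only repeated is_spam hits would put a user's account in front of a reviewer.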


It seems that could easily be weaponized to remove minority opinions.


I would rather hope it kills the big platforms and forces a reverting to smaller platforms and message boards. Social media has become a scourge on humanity.


Should the government be in the business of regulating social media to death because it is a scourge on humanity?

I am generally in favor of big government, but this seems too big even for me. (And if I were to be in favor of it, I would rather the government straightforwardly legislate what we don't like about social media or even ban it instead of burying it under crushing legal liability - which runs the risk that some even-more-scourgey platform will avoid the liability. For instance, the most vapid parts of Instagram would survive because there are no opinions or ideas there, just photos.)


Google is under fire for removing political speech they don't like, not spam. This is where losing the 230 protections becomes a problem for them.


[flagged]


> What. Part. Of. You. Are. Not. Entitled. To. A. Platform. Do. So. Many. People. Have. A. Problem. With. Understanding.

Yeah. I get it. Used to be there.

The part where some companies get Section 230 protection. I mean, that's just a political debate away from death. Get it?

> The idea that "conservative" voices are being censored when the largest news station on the planet is a mouthpiece for "conservatives" and the 9 out 10 of the most shared articles on Facebook come from "conservative" sources is the most laughable argument.

Yes.

> I seriously can not wrap my head around any argument being made about "free speech" and platforms like YouTube, Facebook, and Twitter.

Platforms.

> We learned forever ago from that allowing forums to go unmoderated leads to the absolute worst people taking over that forum.

Sure. So why not just enforce politically neutral moderation? Why is that so hard?!

You get the point, I hope. The argument that "politically neutral moderation" is impossible needs to be made. Not to people who were on BBSes, but to people who grew up on FB.

I used to have your view. The debate has... moved on. There's a really damn good point about the role that networks and platforms play in social discourse, which I think can't just be "you don't understand the internet!"'d away.


> His problem with Facebook/Twitter is perceived liberal bias

This seems to be because they live in a bubble where everyone agrees with them. But when they look at the real world they do not see the same, giving them the perception of bias where there is none. They simply have an unpopular opinion.


Twitter is a bubble. In the public at large, Trump still polls at a better than 40% approval rating and Joe Biden easily beat Twitter darlings like Warren and Sanders.


> Joe Biden easily beat Twitter darlings like Warren and Sanders.

It's worth asking whether Biden's popularity relative to Warren and Sanders is actually an artefact of an under-use of preferential/ranked voting systems for polls and primaries.

To pick an example from last September[0], Warren and Sanders had 19.7% and 17.1% support, respectively, while Biden had 29.6%. That's not to say that Warren or Sanders would have had twice as much support if the other had ended their candidacy then, but it does cast doubt on the claim that Biden "easily beat" them.

[0] https://www.vox.com/policy-and-politics/2019/9/25/20882026/d...


Even if I grant 100% of your argument, nowhere near 29.6% of Twitter expressed support for Biden as their first choice.

Twitter is not representative of the general electorate.


> Twitter is a bubble

Yup.

So is Missouri's GOP.


A politically-appointed board won't do stellar moderation, but it sure will prevent the worst form of moderation that CEOs do.


Be careful of what you ask for.

Section 230 exists because the courts punished Prodigy because they tried to moderate their forums but did it imperfectly, but didn't punish CompuServe because they let anything go. The idea is to allow imperfect moderation in addition to both zero and perfect moderation.

The internet without section 230 isn't a bastion of internet freedom. It's 4chan and 8chan. It's a shithole.


More precisely, the internet without section 230 is two things: it's 4chan and 8chan on one side and tightly moderated corporate-run comment sections on the other (because you need extremely proactive moderation to avoid liability for things people post). You'll still have social media, because the world loves it, but everything will be reviewed by a compliance team at a big tech company instead of being available immediately. Smaller sites won't be able to staff a proper review team - you can still run personal blogs and let trusted friends comment, but you can't do things like run a Mastodon or a phpBB open to the public if you want to do any moderation at all (and if you don't do any moderation, 8chan will raid you).


Well, it's good to know it's not all bad then.

Between the corporately-curated snail mail and 4chan, I think social media will skew toward 4chan, which I'd gladly accept. Frankly, I like it even more than moderately-moderated social media; it's a lot funner. :D Shithole, yes, but charming and fun. But that's probably mostly a product of its anonymity.

Also, imperfect moderation is the thing that tends to annoy users most. With perfect moderation (like a blog with comments disabled?), you have no hope. With no moderation, there's no danger. With imperfect moderation, there's inconsistent or nonsensical bans, there's the urge to take chances and get punished, sometimes by capricious mods, and there's endless sidebars of rules to read before posting. Ugh.


I doubt that it will devolve into 4chan & 8chan, which are abhorrent to common people; there's no money in doing that.

The internet will find a different way to appeal to the mainstream, probably by becoming more similar to cable TV & Netflix & Disney: practically eliminate amateur content and stick to professional, big budget productions.


I'm okay with imperfect moderation. What I'm not okay with is backdoor untracked political contributions under the guise of imperfect moderation. It feels, to me, that Twitter and Google have given Billions of dollars worth of political censorship/promotion/search bias. Let's get the FEC involved so we can measure and track this political spending.


I'm curious... What dollar value would you place on Twitter contribution to the Trump campaign? They have long failed to enforce their terms of use against Trump's account despite him posting tweets that violate them, despite other accounts that posted the same text verbatim having the tweets removed and/or being banned from the platform.

I can't think of any high-profile liberal politicians who have such a blatant disregard for the rules of the platform, so that must be what you're referring to, right? Or is your contention that rules like "don't use our platform call for violence" are themselves political censorship targeted at conservatives?


You might be interested in research by Robert Epstein which calculates how Google search rankings impact votes. He measured how many votes this changed and put some economic value on it.

Here's a quick interview between him and Larry King. His Congressional testimony is also really great, but it's over an hour long. You can find it on Youtube if you're interested. https://www.youtube.com/watch?v=xS3uETvzZZ0

To put it simply, Google decides who wins every close election. If that's the future you want, where tech oligarchs and/or a rogue lower level employee controls entire elections across the globe, then that's the world you have today.

Trump's tweets are a red herring. I'm not a Trump supporter.


This is absolute incoherent nonsense. Are TV networks that air press conferences giving 'untracked political contributions'? Is a network that chooses not to air a particular speech or event giving an 'untracked contribution' in the form of 'political censorship'?


I don’t think everything devolves to 4chan. But if I had to choose, I suppose I’d rather have 4chan than Facebook. But I hope we never have to choose between those extremes.


[flagged]


Start your own platform. Have the government run it. Compete using tax dollars.

Here's one system you could use to run your own platform where you make all of the editorial decisions, you pay for all of the moderation, and you pay for the fallout from attempts to manipulate public opinion: https://en.wikipedia.org/wiki/Mastodon_(software)


> Every single bad thing we've dealt with since 2010 has been started or exacerbated

I think this is a difficult claim to argue, because "exacerbated" could be defined so broadly as to fit any post on social media.

> 4chan is bad, but go to Twitter's company page and look at the organizations they are working with on their policies. Every single one is a progressive, special interest group.

And yet Twitter has struggled with whether to ban actual, self-avowed neo-Nazis. No, not the ones that users call Nazis, but the folks who call themselves that. If Twitter and other social media are too much like 4chan, it seems to me they have a lot further to go than just putting some "progressive special interest groups" on their policy boards.

JK Rowling is welcome to air her views on the platform and users are able to express their disdain. That's just culture, y'know?


Can someone who supports "let's hold internet platforms responsible for what their users do on their platform" explain how that's any different than "let's hold gun manufacturers responsible for what users do with their guns?"

I fail to see a difference between the two, and think both are untenable fantasies.


Internet platforms maintain control over their system whereas gun manufacturers give away control to gun buyers.

The gun manufacturer ceases to maintain control and cannot be assigned responsibility after the sale of the good.


So I guess the same logic applies to the phone companies for calls and texts, since they maintain control over it?


I don't know what the right answer is but as far as analogies go the phone companies don't decide which calls you get and which of your incoming calls/texts to make a priority.

Twitter and certainly Facebook do run algorithms that decide which of the 300 things posted by the 1000 people you friended or follow should be displayed and in what order, so they get to decide what to emphasize as important by deciding what you see first and even what you see at all.

In order to be the same as the phone company they'd have to give you the entire feed of every person you follow in chronological order, no filtering.

I suppose there's one solution for them: they could default to no filtering and let users turn on filters. Then it would be the users choosing the filters, not them. I'm not sure what scrutiny the filters themselves would need.

A simple filter like "show nothing with the word 'poop' in it" would seem safe. A complex filter like "let this ML algo decide for me" I have no idea.


The problem is that right now these big Internet companies aren't "dumb pipes" like phone companies--they are moderating content already, so either they shouldn't moderate at all or they should accept responsibility for how they moderate. If they're making money by persecuting certain groups or enabling child porn or whatever, they should be held accountable. If they're just dumb pipes, then that's fine, but they mustn't moderate content.


I don't think it's so cut and dried. For instance, if you sell a defective gun, and that gun kills the shooter instead of the target, you can certainly be assigned responsibility. Liability doesn't end when something is in someone else's hands. Selling something you know is dangerous, that you know can harm, brings with it its own liability -- contaminated lettuce for instance.

There are a number of implied warranties made when a transaction occurs. [1]

[1] https://www.investopedia.com/terms/i/implied-warranty.asp


American gun manufacturers, like pretty much any other manufacturer, can indeed be sued if their product is defective. For example: Remington has caught a lot of heat for defective triggers in their Model 700 series rifles.

When people say American gun manufacturers can't be sued, they're talking about the PLCAA, which shields gun manufacturers from lawsuits concerning guns they made being used in crimes. The PLCAA does not prevent them from being sued for defective products.

https://en.wikipedia.org/wiki/Protection_of_Lawful_Commerce_...

https://en.wikipedia.org/wiki/Remington_Model_700#Controvers...


>American gun manufacturers, like pretty much any other manufacturer, can indeed be sued if their product is defective.

The parent is explicitly calling out that gun manufacturers can be sued for defects in the product, and as such the idea that manufacturer liability ends after they have sold the product is patently false.


So if a gun manufacturer did keep control over their gun, like with an electronic targeting system making decisions on behalf of the user[0], by the control argument, wouldn't they then have a responsibility to make sure it's used appropriately?

[0]: https://www.tracking-point.com/ (Yeah it's an aftermarket product, but for argument's sake, let's say it was 1st party)


You really misunderstand the tracking-point platform. It's a holdover system to calculate windage and drop on a moving target locally on the system, not some kind of visual analysis engine that does target determination.

It provides "Hey, you need this much windage" not a shoot/no-shoot decision. It's also entirely on-system so the customer owns it, it's not running somewhere in the cloud.


You don't see the difference between "let's hold companies responsible for what people do as part of utilizing their services, while utilizing their services" and "let's hold companies responsible for what people do with an item they have purchased once entirely out of the supervision of that company, without any possible oversight or control"?

I can't hold a skateboard co. responsible for what people do with skateboards they've purchased. I can most certainly hold a skate park responsible for what happens in the skate park.


By that logic you would hold the phone company responsible for calls and texts happening over their network.


A phone company doesn't have algorithms that decide which calls go through and which don't, or which calls are marked as important and which are not.


They don't? What about VOIP?


Phone calls are private communication. Isn't there a law that prevents screening private communications?


Someone correct me if I'm wrong, but I think phone cos can get away with it as they're deemed an information carrier instead of a host. Besides, calls and texts are rather benign; it's visual media that tends to attract the most calls for censorship. Not to mention that real-time censorship of text and voice would be infeasible for a carrier but much more viable for an information/media host.


And phone companies don't play an editorial role in determining what calls you get to hear, and who hears yours. Newspapers are responsible, even when they're publishing someone else's submission to them.


That's a strawman.


That is precisely the issue. Phone companies don't maintain control of their network to filter what calls and texts are allowed, don't maintain political rules on their network, etc. They are neutral carriers.

Meanwhile Google etc. are pretending they are neutral carriers while actively filtering and choosing what is shown, now politically, showing they not only have the capability to control their platform but put it into practice.

If they cannot allow others free speech they should not get special protection that prevents their liability.

If a phone company started filtering political discussion I would absolutely think them responsible now for everything on their network. Your example is perfect to show how different both groups are in capability and practice.


Okay, so are electric companies responsible for people growing weed?


No, I really don't see the difference, because the service here is the product. The service is not a place of accommodation like a skate park.

"Company makes thing, people do bad with thing, hold Company responsible" is a scary line of thought, and that's exact same scenario for both Facebook and guns.

"Thing hurts person using it" is closer to your skate park analogy, and yeah, in that case, of course Company should be responsible for making a bad thing.


It is more of a pre-election warning shot. I expect a lot of negativity from both sides in November, so politicians can't allow platforms to interfere.


Gun manufacturer regulations and internet platform regulations are two quite different subjects with very little overlap.

If a platform controls and exercises editorial control over when something is said, what is said, and who may speak and who may listen, then it may be useful to hold that platform liable. It is about intent, control, and power.

Gun manufacturer regulation, however, is full of rules about protecting society and honoring international agreements. Selling guns to countries currently at war is problematic, so we hold those manufacturers responsible if they try to profit from running guns.


Really, even though the platform you're using voluntarily chooses what you're allowed to post, as with almost all online platforms?

Besides that, you're framing this wrong. If I host a platform as I host you in my house, I have a right and a duty to make sure you aren't committing illegal actions within my domain. This is a fairly universal law, written and unwritten, that who and what you host in your domain is your responsibility. Why should a few privileged platforms get a free pass?


I'm pretty sure the argument "if you invite someone into your house, you're responsible for their illegal actions" is plainly incorrect, but I'd be happy to see a citation of that statute.


There is a common exception in law for utilities. The power company is not responsible for whatever nefarious activity is enabled by supplying power. If you have an illegal marijuana grow operation the power company doesn't get prosecuted nor are they responsible in law for proactively identifying those growers. ISP and service providers don't want to be utilities because that requires that they provide a level playing field to their customers, but their business has all the characteristics of a utility and their customers would pretty much all prefer that over the current situation.


You may not legally be held liable, though in many cases you can be. It's a common trope within law, and I'd guess derived from common sense, that if you host illegal content then you are bound to scrutiny for potentially enabling it. So by extension, if you care about being a free person, you are responsible for it. Anything illegal in my home and I'm bound to scrutiny and law. Why should big cos get a free pass? To me, I couldn't care less either way, but consistency in law would be just lovely.


>"let's hold gun manufacturers responsible for what users do with their guns?" I fail to see a difference between the two, and think both are untenable fantasies.

Sometimes the arms dealers (sellers, not manufacturers) are liable for the actions of the gun owners.

There is no shortage of cases, I searched Walmart (because Walmart scale), but some examples:

1. Walmart Settles Lawsuit for Selling Gun Used in Murder by Neo-Nazi (https://blogs.findlaw.com/injured/2018/11/walmart-settles-la...)

2. Wal-Mart sued over sale of bullets used in Pennsylvania murders (https://www.reuters.com/article/us-pennsylvania-bullets/wal-...)

Sellers of guns and ammunition assumed they were protected from liability by the federal Protection of Lawful Commerce in Arms Act.


> Sellers of guns and ammunition assumed they were protected from liability by the federal Protection of Lawful Commerce in Arms Act.

On that angle, why are the folks arguing "a few big tech companies have had immunity for too long, they have too much influence," not also arguing "a few gun manufacturers have had immunity for too long and have too much influence?" To me, providing a tool to kill and facing no consequence is a lot more influence than Twitter enforcing their terms of service.

To be clear, I don't think weapons companies should be responsible for what people do with their products. I certainly don't think tech companies should be responsible for what people do with their products, either.


Let me know when the CEO of Glock or Colt comes and takes back the gun I purchased from them due to a Tweet of mine that the mob disliked.


Well, there are rumblings of various smart guns, where your gun ownership (and presumably various socially acceptable exceptions, like being a felon) is checked upon attempting to use the gun. I am not a fan of the MGS games, but the author may have seen a small portion of our future.


>On that angle, why are the folks arguing "a few big tech companies have had immunity for too long, they have too much influence," not also arguing "a few gun manufacturers have had immunity for too long and have too much influence?"

Not only is that whataboutism, but it's out of touch with reality... there are millions of people who believe there should be liability, and giant organizations lobbying for gun/ammo manufacturer liability, and they have been successful, because we are now seeing liability in the courts.

It's law; think of how it took decades for big tobacco to become liable, when they not only knew of the dangers of their products but actively and intentionally withheld that information from the public.

>I certainly don't think tech companies should be responsible for what people do with their products, either.

Do you have a bias? Do you work in tech?


Pretty certain that settlement admits no guilt.


If you bothered to read the articles I linked they include many other examples such as:

>last year, in a Wisconsin case, a jury found a Milwaukee gun store liable for selling a gun to a 21-year-old customer


The liability springs from enabling a straw purchase the plaintiff alleges should have been obvious and therefore the sale should have been denied.

I imagine there is evidence (video, witnesses, etc.) that tend to indicate that the dealer knew or should have known he was facilitating a straw purchase.


Yes, you need an underlying cause of action... that's how law works.

The point is liability: gun manufacturers and dealers can be liable for their products being used in killings by third parties... even after lawful sales.

Take the case of the Sandy Hook victims who sued Remington. Initially their case was dismissed, because the lower court ruled the manufacturer was shielded from any liability under the Protection of Lawful Commerce in Arms Act (PLCAA), but on appeal the court overturned the ruling and declared that the victims' families could in fact sue under state law on separate causes of action/theories. In that case they were suing Remington for violating the state's Fair Trade Practices Act (on the factual basis that Remington marketed military-style weapons to civilians). So you could just as easily shrug that off and say "well, liability there sprang from..."; of course liability has to spring from somewhere.

So in the case of tech, if you wanted to sue the platform, you would need an underlying cause of action... whether that be defamation, trademark infringement, or copyright infringement. Liability for the tech platforms would have to spring from somewhere, just like any other cause of action.


I guess I was differentiating plain old negligence from the lawsuits that specifically go after immunity carve-outs in the PLCAA.

Interestingly, most of the reporting I was just reading suggests that the Sandy Hook case has been allowed to move forward because makers and sellers lose their immunity if they "knowingly violated a State or Federal statute applicable to the sale or marketing of the product." This sounded like an overly broad immunity carve-out to me, but if true it seems reasonable that the case is still alive.

But of course the reporting is not complete; it leaves out the second prong of this carve-out: "and the violation was a proximate cause of the harm for which relief is sought."

I can't imagine the Sandy Hook plaintiffs proving or even providing evidence that supports the second prong. It will be interesting to see how this case unfolds. Hopefully it will not settle until all the appeals have been exhausted.

For Section 230, the immunity carve-outs are much clearer and more limited. The tech industry must have better lobbyists than the gun industry.

Though if Biden is elected they may have problems:

In January 2020, former Vice President Joe Biden proposed revoking Section 230 completely. “The idea that it’s a tech company is that Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms,” Biden said. “It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false.” Biden never responded to follow-up questions about this statement.

https://www.theverge.com/21273768/section-230-explained-inte...


>plain old negligence from the lawsuits that specifically go after immunity carve-outs in the PLCAA.

Negligence is a cause of action based on the elements of duty and breach of said duty. So the question becomes what was the duty of the manufacturer and how was said duty breached (the theory of the claim is very fact-specific; there isn't really a "plain old negligence" theory). You are right about proximate cause, which is another element of negligence claims that must be proven: there must be a proximate causal link between the breach of the duty and the damages. That means there could be a duty, and even a breach of said duty, but no liability, because the breach was not the proximate cause of the damages (this really gets into the weeds of case law).

>The tech industry must have better lobbyists than the gun industry.

It's not about the quality of the lobbyists (the NRA and big tech likely use the very same lobbyists); it's about spend. Because of the proliferation of mass shootings, almost everyone knows about the NRA and pro-gun-rights groups, while almost no one knows about big tech lobbying, and they might be surprised to learn how much big tech outspends the NRA and gun-rights groups... big tech spent $500M on lobbying in the last decade.


Lawsuits and settlements are not legal liability; it's just sometimes cheaper and better PR to settle.


Read the articles:

>in a Wisconsin case, a jury found a Milwaukee gun store liable for selling a gun to a 21-year-old customer...The gun was later used by an 18-year-old to shoot and critically wound two police officers, who were awarded damages by the jury.

Jury verdict (legal liability) against the seller of the gun, which was used in a shooting by someone other than the person they sold it to.

FYI: Walmart doesn't settle for cost or PR, they settle because of liability.


>Jury verdict (legal liability) against the seller of the gun

A jury verdict in a different case with different particulars doesn't immediately imply that Walmart is guilty. The particulars of both cases matter. Being sued and settling does NOT imply legal liability, and someone else winning a different case is also not legal liability for Walmart.

>Jury verdict (legal liability) against the seller of the gun, which was used in a shooting by someone other than the person they sold it to.

Case particulars matter.

>FYI: Walmart doesn't settle for cost or PR, they settle because of liability.

This is just not true. Walmart does things for PR all the time, including settle lawsuits they know will make them look bad.


The liability in these cases stemmed from alleged negligence in facilitating straw purchases. And, in the PA case, possibly from selling bullets to an underage buyer.


As mentioned before, that's how law works. Before the erosion of the federal law protecting manufacturers and sellers of firearms, cases with the very same set of facts would have been dismissed, because there was a law shielding them from liability.

The parties have to be liable for something... and the same would be true of tech companies if their current federal protections were removed; they would have to be liable under some valid legal claim such as negligence, defamation, copyright infringement, or trademark infringement.

The point is removing the federal shield that the multi-billion-dollar companies lobbied for to protect themselves from lawsuits, so that when there is an otherwise valid claim for which they are liable, they can be sued.

Honestly, how did you think it works?


In a nutshell, this would require congressional approval to pass. Both parties have expressed desire to alter the current legal protections that internet firms have, but it’s not clear if there will be a bipartisan consensus on what this change will look like if/when formal bills are introduced.


Perhaps the most important line from the article:

> The Justice Department proposal is a legislative plan that would have to be adopted by Congress.


This is an important point. Given that Congress is divided between the two parties, the chances of something like this becoming law are zero. So why is the proposal being made? It's a presidential election year, and the president is working the refs, trying to scare them away from anything that might make it even a little bit harder for him to get his "message" out.


>The Justice Department also will seek to make clear that tech platforms don’t have immunity in civil enforcement actions brought by the federal government, and can’t use immunity as a defense against antitrust claims that they removed content for anticompetitive reasons.

Oh boy...the costs of running Google, Twitter, Facebook and others... will quintuple overnight when Congress passes this.


It's wonderful to see the price of censorship by colluding monopolies is going to skyrocket.

I can't wait until the fines start raining down. They'll have earned every cent of the financial damages. The arrogant, biased platforms picked a fight they can't win with half the political power in the US.

This rapid, broad shift is why Larry and Sergey ran for the hills not long ago, abandoning Alphabet as fast as possible; they saw what was coming (including the anti-trust investigations). I bet they destroyed as much of their internal communication history as possible as well (legally of course, probably), so it can't be used against them or the company.


It’s surreal that a popular Republican position now is: “We want the government to impose giant fines and new regulations on the most successful American businesses of the past decade, hopefully destroying them.”

Regardless of the merits of the idea, it’s an amazing paradigm shift for the GOP.


I don't think that is the position. I think they are just tired of a perceived bias against the right by left-leaning aggregation organizations, compounded by cancel culture.


It is still their position - that persecution complex is just their rationalization for their hypocrisy and abuse of power.


What about the perceived bias against the left by right leaning news and radio organizations? This is dishonest partisan hackery.


That's the thing: it's perceived, not real. Media outlets driving that narrative are lying to their viewers for clicks. If someone refuses to drop their bias in the face of contrary evidence, or that evidence isn't even presented alongside the initial claim, they may be watching an entertainment channel, not a news channel.

https://thehill.com/opinion/technology/440703-evidence-contr...


I can’t speak for Twitter and Facebook, but it’s absolutely real on Reddit. All one needs to do is look at how they treated r/the_donald compared to r/politics.


And seemingly all because these platforms deign to accurately represent how much opposition there is to the Republican agenda.


A small tweak of moderation policy shouldn't destroy them.


It is not a paradigm shift for the GOP. Libertarians are not the only part of the GOP.


Hint: this isn't new regulation; it's rolling back a regulation that has been defending these publishers pretending to be platforms.


Ah, seeking to mess with Section 230 again, just like with the EARN It Act. Any company that stays headquartered in the USA if this passes is just begging for trouble.


So funny, considering the history of 230 and how Prodigy was the inspiration for it because they moderated user posts.


That really doesn't change anything. If you want to do business in the US (and everyone does) then you're subject to US laws. "Jurisdiction" here is simply a question of how a country defines it and is willing and able to prosecute it.

For example, if two US citizens on US soil discuss insider trading of an Australian company that does not even do business in the US using trades on US brokers, those two individuals are in violation of the Australian Corporations Act and can be criminally prosecuted (by Australian authorities). Why? Because Australia claims jurisdiction over any Australian company.

Likewise, "sex tourism" with children in South East Asia is rampant and many countries are unwilling or unable to prosecute. Australia has deemed having sex with an underage person in a foreign country is likewise a crime in Australia that they can and do prosecute.

The US is able to to exercise a lot of power with international banks because they have the power to remove a financial institution's access to the US banking system. It's this stick that allowed the IRS to go after Swiss banks for complicity in US citizens evading US taxes.


That's what makes laws like this so silly. Look at how companies split into pieces to avoid corporate tax. They'll simply do the same here. Such a waste of time and effort and it really only affects the small companies which don't have this problem anyway. So dumb.


Yeah all those dumb secretaries, presidents, senators and the people who work for them, they really ought to be reading more HN to learn about how things work.

Unless, of course, any company whose viability materially depends on Section 230 protections would have been dead in the water in pretty much any other jurisdiction on earth in the first place, in which case the threat of moving out of the US was a naked bluff and the powers that be realized it.


They have a lengthy history of technically illiterate demagoguery and utterly idiotic statements about it while only paying attention to what their lobbyists feed them.


What does this have to do with technical literacy? They're not writing an RFC, and there's nothing technical about Section 230 of the Communications Decency Act.


Yep.

If companies are willing to put up with the Chinese government telling them what to do, what to say, and how to operate their business in China, I'm pretty sure America will be fine.


The GDPR was objectively far more consequential and I don’t recall a mass exodus of companies from the EU.


There weren't that many companies there to leave to begin with. But I'm not even in the EU, and I do still encounter GDPR "access denied" type pages from time to time on US sites.


Pray tell, where should such a company go?


Ireland looks good, at least on paper. I'm attached to the U.S., but if I were looking at relocating to Ireland I'd want to talk to people I know who are on the ground there, or at least friends of friends who are. A puff piece, but it has some interesting tidbits: https://www.forbes.com/sites/shourjyasanyal/2018/11/27/is-ir...


And I suppose you have done research on the Irish legal system, its liability protections for third-party content providers, its jurisprudence on hate speech, defamation, and libel? (Hint: it's part of the EU.)


I have not, and never claimed I did. I said it looks good on paper. A number of sources have said it is startup friendly.

If the EU's laws are so burdensome, why is there a thriving startup ecosystem in the EU? I have read about EU's stances on hate speech, defamation, and libel (though I wouldn't call that hobby reading research), and I am fine with their stances.

I think we could use more hate speech protection, when I see reports that as much as 60% of the tweets in the current U.S. political conversations are done by biased bots.

And no, I am firmly against EARN IT and the other 230 attacks. We need internet legislation that is thoughtful, created by technical SME staffers and constitutional law SME staffers, not broad-brush legislation pandering to votes, FUD, or special interests.


> If the EU's laws are so burdensome, why is there a thriving startup ecosystem in the EU? I have read about EU's stances on hate speech, defamation, and libel (though I wouldn't call that hobby reading research), and I am fine with their stances.

There is nothing particularly thriving about it. There are just a lot of them because it costs almost nothing to incorporate a company and 'startup' sounds cooler than a 'small consulting business' or a 'software house'.

Can you name many EU startups that IPO'd or got acquired for an impressive amount during the last decade, or were even considered "unicorns" at any point by anyone other than themselves?

In any case, Section 230 isn't about startups per se but specifically about 3rd-party content providers - do you know of any such companies that are EU-based?

> I think we could use more hate speech protection, when I see reports that as much as 60% of the tweets in the current U.S. political conversations are done by biased bots.

Have you considered the possibility of those reports being disseminated by biased bots?

> And no, I am firmly against EARN IT and the other 230 attacks. We need internet legislation that is thoughtful, created by technical SME staffers and constitutional law SME staffers, not broad-brush legislation pandering to votes, FUD, or special interests.

You sound confused. How do you expect those hate speech protections to materialize if not by modifying or repealing Section 230? I mean, there's always the option of repealing the 1st Amendment - no one seems to like that pesky thing anyway these days.


> Can you name many EU startups that IPOd or got acquired for an impressive amount during the last decade or were even considered "unicorns" at any point by anyone other than themselves ?

In addition to what sobani said:

HelloFresh, Transferwise, N26, Revolut, Telegram (if you stretch the meaning of Europe), Klarna, and Auto1 Group, to name a few.

Of course not all of them operate globally, but that isn't a requirement.


> Can you name many EU startups that IPOd or got acquired for an impressive amount during the last decade or were even considered "unicorns" at any point by anyone other than themselves?

You mean besides companies like Spotify, Rovio (Angry Birds) or Mojang (Minecraft)?


That's really not much to show for an economic superpower of almost half a billion people, and none of these companies rely on the protections offered by Section 230. I mean, maybe if you got creative enough in Minecraft you could trigger some, but it's owned by Microsoft now, so you'll have to deal with their censorship first.


We have different ideas of success. My idea of a successful startup is not that it must reach unicorn status, but that it either has a successful exit that leaves the founders and early employees comfortable enough to spend time on their next startup adventure, or that the company survives and continues to grow.

You place a lot of value on unicorns, but there are downsides to shooting for unicorn status too. If you make it there you may be fabulously wealthy, but you'll have taken so much investment that most founders lose control of what they created, and many succumb to bad investment terms with so much dilution that the cap table ends up upside down. Also, most who aim for unicorn status ("I want to be rich!") fail. I'm ignoring bubbles, because if you have something with no real contribution that rides or sparks a fad, you can still get a lot of investment during a bubble - I don't know if you could get to unicorn status that way, though.

As for those companies not being capable of being platforms for hate speech: Minecraft obviously could be, Spotify is music so that is a clear possibility, and Telegram is messaging so that is also possible. The ones that are financial apps are unlikely hate speech platforms, unless they allow comments or reviews.


Switzerland seems to be a popular choice; curious if there are any big negatives there (business-wise).


Well, for one, it's a super expensive country and the cost of doing business is huge, but we aren't talking about general business-friendliness, rather about a very specific set of legal protections provided by Section 230 enabling the business models of Facebook, Twitter, Google, Reddit, et al.

I don't know the specifics of the Swiss legal system in that regard, but the fact that nothing similar ever came out of there probably says something.





I wonder if the colloquial understanding of platforms and ownership got bad for reasons even aside from the blatant propaganda of special interests.

Back in the 90s and 00s, even when dim bulbs like Yahoo or AOL did dumb stuff like shutting down child-molestation-victim support channels in ham-handed attempts at moderation, even the dim bulbs responding to them never thought the government should somehow punish them, though it was rightfully called stupid and morally wrong. Was that because people actually understood that the internet existed as many small sites as well as the big names?


When social media firms ban conservative voices, they need to be sued for interference with interstate commerce. Because that's what it is.


This is a gross misunderstanding of the Commerce Clause. It does not, and has never, placed any responsibility on private businesses to facilitate interstate commerce. (Nor is it clear that publishing an online posting is even a form of commerce.)


I'm experimenting with Markdown in this comment and may edit to reformat links better. Actual comment:

Specifically when conservative voices are banned, or a voice of any leaning? If a voice, right or left, tweets in support of hate or violence, should that be removed with equal prejudice, or left in place regardless? If it's a left-wing voice posting bannable content 9/10 times, is that unfair banning/censoring of left-wing voices, or simply the rate at which such bannable content occurs? Does that cross over into publisher status? I don't think so.

I ask because the outrage over specifically conservative voices being censored has less to do with reality and more to do with loud people wanting attention, as the bias against them is [mostly made-up](https://thehill.com/opinion/technology/440703-evidence-contr...).

There isn't censorship targeting right-wing comments, just removal of extremist comments that often catches vocal right-wing groups MORE OFTEN than left-wing voices. There are loud people on the right (and some on the left) spouting misinformation, disinformation, debunked conspiracy theories (think QAnon), or outright threats and lies. These loud people and groups, when their content is removed, get louder, and right-wing media platforms embrace this because it feeds a common [Persecution Complex](https://en.wikipedia.org/wiki/Persecutory_delusion) that is largely non-existent and often linked directly to some level of what some call privilege (white/rich/establishment/etc.). Yes, there are left-wing media platforms that do the same, but they are nowhere near the audience size of Fox News. Fox isn't even 'news': as of October 2018, they [specifically noted in their ToS that they are entertainment](https://mediabiasfactcheck.com/fox-news/). If you're getting the idea that banning of conservative voices is censorship from any voice/commentator hosted by Fox News, that's not news, it's entertainment!

I mean, look at this [summary of legal cases](https://www.theverge.com/2020/5/27/21272066/social-media-bia...). The majority are from conservatives, with one item from a Democrat, and most are quickly struck down because the complaint is founded on an inaccurate idea of the First Amendment; namely, the users' rights were not infringed, and the lawsuit attacked the platform's rights! The platform has the right to not let you use it for your inaccurate, biased, or even malicious content. You are not infringed upon by being removed from that platform for those issues specifically, or likely for anything else the platform deems against its ToS or rules. In r/conservative, this means any dissent from the established norm, even just pointing out polling data that invalidates the headline, is an immediate ban. That's allowed by Reddit and not under the jurisdiction of the government. It's not lawsuit-worthy either. My rights aren't being infringed if a subreddit wants to be an echo chamber of misinformation protected by heavy censorship. There is some schadenfreude when that sub complains of censorship of right-wing voices but rejects any sources stating that's not reflective of reality.

As a comparison, consider how climate science is presented across media. 99% of scientists agree climate change is aggravated by human activity and is something we need to tackle; so does the Pentagon. 1% are the counter-voices saying it isn't an issue or that human activity is not a factor. These sides are then presented as equal (which is misinformation) and given equal weight, as if debated 1v1, not 99v1 like in reality. Same thing for right-wing voices and voters across the US: [they are a minority](https://news.gallup.com/poll/15370/party-affiliation.aspx). 25% of those polled identify with the GOP, even though the GOP holds more than 51% of seats in the Senate and, after the 2016 election, held more than 51% in the House for that term. When this minority holding a majority is 'attacked', everyone on connected outlets is going to hear about it (remember, on some platforms this is entertainment, not news). That's basically the commentator's First Amendment right to protest the ban, and definitely allowed under free speech. That doesn't mean it's an accurate portrayal of reality, just like the censorship of right-wing voices _seems_ biased but really isn't. In both cases, the minority is extremely vocal and actively disguises (or simply ignores) data stating otherwise. Conflict, even artificial, drives clicks and revenue, and that's what entertainment is all about.

My bigger question is: how is this interference with interstate commerce? I could see that argument applied to an influencer or commentator who is removed from their primary platform; if they lost revenue they may even have standing. But that still isn't an issue for Twitter. You can be removed from any platform you don't own, and you should always have your own site and system set up for hosting your content; that's been preached by internet-first media groups since YouTube rose to prominence. But it isn't a violation of interstate commerce.


Can you clarify what you meant by "experimenting with Markdown"? (Since I see you're not a new user.)


Markdown is a common formatting system on GitHub, Reddit, and a few other sites. I like how it works, so I've been looking up more complex how-tos and playing with comments on various sites to see which support Markdown formatting. One thing it lets you do is hyperlink, where [This would be the link text](and the actual website goes here), like [Google](www.google.com). That doesn't work here, but I gave it a shot, as I've been linking that way on Reddit all week when needed. Markdown is neat, but also a buzzword for simple text formatting.
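For anyone curious, here's roughly the difference I ran into (this is just my reading of HN's behavior, so double-check):

  [Google](https://www.google.com)   <- Markdown link syntax; HN shows it as literal text
  https://www.google.com             <- HN auto-links bare URLs like this instead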


Thanks, I didn't need to know what Markdown is, but thanks :)

I asked you because it seemed strange that a user with 120 points and 9 months of usage didn't know yet that this site doesn't support Markdown or anything close, and I wondered if they added some support lately without me noticing.

The only markup/formatting supported by Hacker News is what's described at https://news.ycombinator.com/formatdoc .
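To save a click, my rough paraphrase of the formatdoc (see the link for the authoritative version):

  *text*                <- asterisks around a word or phrase give italics
  https://example.com   <- bare URLs become links automatically

  Text after a blank line that is indented by two or more
  spaces is reproduced verbatim (intended for code).

Markdown-style [links](url), bold, and headers aren't supported at all.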


I don't see this passing constitutional muster. You have a right to free speech, as do corporations - you can be ejected from a privately owned building for saying things the owner doesn't agree with, and the same applies to online platforms.

This is an open-and-shut First Amendment case.


> This is an open-and-shut First Amendment case.

Yes, it is, but the outcome would be the opposite of what you expect.

Constitutionally, private corporations can't be censored or compelled to speak when the speech is 1A-protected speech.

But not all speech receives 1A protections, and Section 230 is about the kind of speech that isn't protected by the 1A. E.g., without 230, corporations could be sued for users' libel or held criminally liable for helping distribute many different types of speech (e.g., making terroristic threats, distributing child porn, facilitating illegal acts, etc.).

So, a case that hinges on 230 protections would be open-and-shut if 230 were repealed. Just with the opposite outcome of the one you're expecting.


Not even close to open and shut.

You realize that an editorial posted on the NYTimes or Fox News site can get them sued for libel or defamation, right? This does, in fact, happen all the time. Read about several of the left-leaning cable networks and their lawsuits over the 'Covington Kids'. They are publishers, and are responsible for their content.

Google, Twitter, Reddit, etc. acted like platforms for many years, same as the telecoms, and nobody ever complained. Look at how much influence and cash they have. What could possibly cause them to risk this gold mine?

Donald Trump became president, and the liberal companies and employees in Silicon Valley and on the West Coast lost their nerve and resorted to censorship, deciding to use their 'platforms' as their own private political tools.

Oops...

Now they can enjoy the same restrictions that other publishers have always had to deal with. Hopefully their shareholders realize the source of the issue, and boot the activist executives and employees, as it's now going to cost them a lot of money, all of which they brought on themselves.

* https://www.washingtonpost.com/lifestyle/style/cnn-settles-l...


> as it's now going to cost them a lot of money

As one of those shareholders, from a purely profit-motive, I'm not so sure that replacing 230 with something more onerous would be a net negative.

Moats are expensive, but also valuable.


Should I be able to sue YC when Hackernews commenters say something objectionable? That Paul G money sounds nice.


And this is all because the President got mad at Twitter, and both the right and the left always think more government is the answer.

This is what happens when you get government involvement in tech.


No. Nothing is ever this simple. Even a cursory search would show that this is an ongoing saga linked to LEOs' displeasure with encryption. I am not a fan of Trump personally, but that's no reason to let it cloud your judgment.


It literally is about Trump being angry that Twitter fact checked his outright lies about mail-in voting.

There are absolutely other powers that stand to benefit, or are pushing the agenda as well, but to claim it's not a result of Trump's hissy fit is ridiculous.

https://deadline.com/2020/05/donald-trump-twitter-social-med...

The executive order was signed by exactly one individual who was completely open about why he was signing it.


This was being proposed by Barr and co a few months ago, the current Twitter debacle (A tweetacle? Tweetgate?) just happens to be an excellent case to push it forward with support.

Before that it was proposed repeatedly by several Democrats, especially during the Birther nonsense.


I guess it's a good thing I already said other parties had an interest in the topic at hand.

Unless you know something I don't, Barr doesn't have the power to sign an executive order. Neither do any of the Democrats. There's exactly 1 reason the executive order was signed and it wasn't Democrats or Barr, nothing I stated was inaccurate. Appreciate the downvote because you don't like reality.


I think you are assuming that the order is a cause rather than a convenient excuse to pursue a specific agenda. I appreciate the approach, but the person signing it is just a convenient lightning rod. Note that it seems to be working very well.


A convenient excuse for what? The president doesn't need an excuse to sign an executive order, he needs exactly 0 permission from anyone to do it. This isn't some new law drafted by congress that needs political cover to move through the process, it's an executive order.


I think I am going to stop here. I am not sure you are open to a discussion.


If he cares for re-election, he still needs excuses from time to time...


The DOJ report that was just released was the result of a year-long investigation by the antitrust division of the DOJ. While its release may have been accelerated by Trump's executive order, it certainly wasn't caused by Twitter's recent fact-checking, and it doesn't directly address some of the points in the executive order. Rather, it focuses more on "big sites aren't doing enough to combat crime" than on "conservative voices are being suppressed".


That's just one scenario. The broader one is tech companies refusing to prevent federal crimes from being committed on their platforms.

At the end of the day, it's expensive to police your platform so they went with the immunity route.


They're a platform, and aren't a specifically vested law enforcement organization.

Think about it. What you're really asking for is for these companies to become part of the law enforcement apparatus, which means by definition they can no longer be seen as private businesses and, arguably, become unfit to do anything on.

Imagine a world where law enforcement is given access to an oracle by tech companies capable of spelling out every individual who broke any law in any jurisdiction at anytime, anywhere today, so long as they use the platform.

If you don't already feel uncomfortable, or see why that would impact the desirability/feasibility of the system, you're probably not trying very hard. You're basically handing law enforcement a tool capable of handing the reins of the country over to the auspices of prosecutorial discretion.

The one thing that has kept LE in check has always been the high price tag of the due process involved in depriving someone of their rights. This means it is strongly confined to only that potential pool of people who generally meet the criteria for clear and present threats to society (your agreement as to the priorities over time may beg to differ, but the point is that it is fundamentally limited to a very small fraction of the population). With the integration possible through tech, you cannot afford to happily hand over records in digital format. It is simply too bloody dangerous a tool in the wrong hands.

Furthermore, no one wants to accept that sometimes societal goods are bundled with the empowerment of bad use cases, since malicious actors are equally buoyed by societal infrastructure. I don't see people clamoring for TV manufacturers to start recording the insides of homes because there may be pedophiles in them. I don't see people swarming in droves to say "eavesdrop and surveil me" once they know that is essentially what goes on in tech. No one wants that. They grudgingly accept it because no one else wants to, or can, figure out how to implement something without it that still allows the type of information propagation people actually want: for the people they want to know more about them to have greater access, and for the rest to require substantial effort to get at that information.


You are right. Even the parent post is right, in the sense that there are multiple reasons. I think your explanation covers the tech sector's perspective (they are in a sweet spot now and don't want to lose it). I maintain that the main reason this is happening is the steady push to make policing easy again.


Why isn't the market holding Facebook accountable for the numerous transgressions we've seen coming out of that company over the last several years? Because people don't understand or care how the money is made, which fundamentally undermines the argument that the market is always right.

From broad, repeated invasions of online privacy to numerous scandals involving state-sponsored disinformation campaigns, Facebook shows time and again that they are not responsible corporate stewards of the internet.

Zuckerberg has got to be one of the least popular Fortune 500 CEOs and yet he's completely invincible, investors don't want to touch him.

So how do you propose to hold such a company accountable if not through regulation and oversight?


Investors don’t want to touch Zuckerberg because they simply cannot. He has enough shares to be unremovable.


I don’t believe this is true. If an Icahn-like activist investor shorted FB and successfully triggered a mass sell off tanking FB stock, it would be really hard for Zuckerberg to survive that pressure, and he’d be financially incentivized not to. You’re right in that the board doesn’t have the power to oust him, but collective investor action could. It’s hard to find people who want to rattle one of the most consistently performing tech stocks, though. Unfortunately, I think the chance of something like this happening is nil.


Do you think zuck cares if he’s worth 100b vs 20b?


Facebook was fined $5,000,000,000 by the FTC for what the company of the president's campaign chief executive (Cambridge Analytica) did: they used military info ops on the American people in an election.

And now they want editorial control over publicly owned information services?

Why don't they (the installed alt-right) go claim that the government deserves editorial control of the papers as well?

Clearly that's the issue needing attention here; not obstructing an obstruction investigation.

https://en.wikipedia.org/wiki/Facebook–Cambridge_Analytica_d... and up a bit further: https://en.wikipedia.org/wiki/SCL_Group

The current administration just gave away $500,000,000,000 of taxpayers' money and now won't even release the list of who took the money?

And you think that the problem is Facebook's editorial transparency and lack of accountability?

Kindly review https://usaspending.gov and tell me where our money is going


You assume that they have the same values and definitions of transgressions. That just plain isn't true, no matter how "obvious" it is that something is or isn't one.

Besides, blaming them for being unable to stop state actors is a bizarre form of victim blaming - complicity would make sense, sure, but not inability to stop it. Asking them to be both stronger and weaker than states at the same time is an unusual demand.


The market has spoken. They don't care.


My friend... Tucker Carlson was the most watched news anchor in all of America last week. It is apparent that a good portion of the population doesn't find Facebook's decision to allow free speech at all objectionable.

Have you considered that the reason most people are okay with Facebook not punishing President Trump is that many people agree with him? Have you considered -- Facebook having more active users than Twitter -- that Facebook is more representative of the United States than your feelings?

Twitter is known to be mainly popular among the media, who -- shock and surprise -- don't like Trump and lean liberal. Facebook is popular amongst the rest of us. Is it possible in your worldview that other people may not think like you?


omg... right-wing developers are a thing =:o


Congratulations Twitter! You improved the Internet in the same way as the Internet Archive by pushing too far.

Hope you are satisfied with all that awesome power.


I agree. I used to believe in absolute freedom of speech on the web. But then people started sending goatse or whatever as a joke in emails, and I learned to avoid opening any links from certain friends.

MySpace, Facebook, and Twitter were nice, clean spaces to hang out in for a while. Then horrible and traumatic pictures and videos started showing up in my feed. I know the world is a horrible place, but I don't need a constant reminder of it. I unfollowed as many people as I could.

Now, as a parent, I cannot constantly monitor these supposedly safe sites. I have seen disgusting or violent videos on YouTube Kids, Amazon videos aimed at kids, and even some kids' shows on Netflix.

These platforms should be responsible for the content they host, no matter who uploaded it. That would be one way to clean up the filth.

That's why I will pay for cable TV again and let someone moderate content for me.


There are filters for kids, and indeed it wouldn't be bad in any way if there were various filters on YouTube, Facebook, etc.; they just ought to be voluntary (or at most imposed by one's parents).

And... this isn't going to change "kids' shows on Netflix"...



