I'm not sure I agree with that framing of the relationship between Section 230 and free speech.
For reference, here's the law:
> (1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
> (2) No provider or user of an interactive computer service shall be held liable on account of— (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Section 230 was written to solve a very specific problem. Prodigy moderated content on its service, and when a user posted libelous content that Prodigy failed to remove, Prodigy was held legally responsible (Stratton Oakmont v. Prodigy). CompuServe did not moderate content, and when a user posted libelous content, CompuServe was not held legally responsible (Cubby v. CompuServe). There was a perception that this was a counterintuitive result, and so Section 230 patched over it.
This has nothing to do with the ideological content of the communications. The messages in both cases were already unlawful because they were libelous - the question was whether only the end user was liable, or whether CompuServe and Prodigy also bore liability (i.e., an obligation not to republish it).
Also, as written, Section 230 does not create an obligation to do anything. You don't have to moderate obscene, lewd, etc. content. You can choose not to moderate anything. The law simply says (1) you, the website operator, aren't responsible for what people post, and (2) you don't gain any additional liability if you choose to moderate these things. It doesn't create any liability for not moderating them. The perception (which seems to have been empirically correct) was that Prodigy's approach would be more popular in the market than CompuServe's, and so the law should not create a legal incentive to act like CompuServe. The new law simply removed that incentive; it did not create a legal incentive to act like Prodigy.
The results of the two cases are only counterintuitive if you believe it is good for society for service providers to proactively moderate speech that is already illegal and to err on the side of over-moderating. I don't think that belief is easy to reconcile with a strong pro-free-speech view - you're trusting a platform to make decisions that would otherwise be made by courts, and if it decides to moderate you, you have nowhere near the representation, recourse, etc. that you'd have in the legal system.
In particular, adding an obligation to protect free speech means that providers can only moderate content if they're confident it would otherwise result in legal liability. If they're not sure (suppose that, to pick a recent example, someone says that J. K. Rowling "cannot be trusted around children" - is this libelous, or a constitutionally protected opinion?), they should err on the side of not moderating. But that matches the status quo ante Section 230. If you think that forums should err on the side of under-moderating, then it was perfectly fine to be in the legal situation where Prodigy's approach was riskier than CompuServe's.
Note also that neither of these scenarios does anything to discourage people from running forums where they tightly control what is said (ideologically or otherwise). If I want to host a personal blog with only my own posts, I can do that today, I could do that before Section 230, and I can do that essentially regardless of anyone's proposals (because I have a First Amendment right to say what I want and only what I want). If I want to invite my friends and only my friends to comment, I can do that too. If I want to invite the entire world to comment and I screen comments before posting, I can do that too (I also have a First Amendment right to free association). I'm still liable for unlawful posts (from libel to copyright infringement to whatever else), but if I'm willing to tightly moderate content, that's okay.
Another pro-free-speech opinion here, by the way, is that the real problem is with libel laws, and neither CompuServe nor Prodigy should have been held liable because the speech shouldn't have been illegal in the first place. This is entirely orthogonal to the "free speech" concern of perceived ideological bias.
It's only in the weird intersection of all of these things that the framing of Section 230 and ideological bias seems to make sense - you'd have to take the anti-free-speech view that ruinous penalties for libel are good, and then carve out an anti-free-speech exception that says that if you choose not to exercise your right to say what you want or associate with who you want, libel laws don't apply to you. And then, somehow, the two anti-free-speech approaches cancel out and turn into a free speech view - platforms are obligated to be non-ideologically-biased (in a sense defined by the government) for fear of arbitrary civil penalties.
(By the way, any free-speech reform to Section 230 really should start with repealing 230(e)(5), where FOSTA/SESTA partially removed Section 230's protections so that platforms became responsible for messages posted by users about "the promotion or facilitation of prostitution.")
Yeah, I agree with a lot of that. Especially the point that creating an obligation to protect free speech is, even if a nice value, pragmatically totally impossible to implement. That's sort of what I was trying to say in my original post (poorly, I guess): "sure, maybe free speech would be nice, but honestly, how? all the medicine seems worse than the disease, and this is a fundamentally thorny problem"
What do you think of the proposal that, to keep Section 230, websites with a large audience must implement a standardized API, and then folks would be allowed to create their own content filters on top of that API?
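Purely as a hypothetical illustration (no actual bill specifies this, and every name below is made up for the sketch), I imagine the API working something like this: the platform exposes raw posts, and users pick or write their own filters.

```typescript
// Hypothetical sketch of a "standardized filter API" - illustrative only.

interface Post {
  id: string;
  author: string;
  body: string;
}

// A filter is just a predicate over posts. Users (or third parties they
// trust) supply it; the platform only exposes the raw content.
type ContentFilter = (post: Post) => boolean;

// The platform applies whichever filter the user chose, rather than
// applying its own moderation policy.
function renderFeed(posts: Post[], filter: ContentFilter): Post[] {
  return posts.filter(filter);
}

const examplePosts: Post[] = [
  { id: "1", author: "alice", body: "A perfectly ordinary post." },
  { id: "2", author: "bob", body: "BUY CHEAP PILLS NOW" },
];

// One third-party filter among many a user could subscribe to.
const blocklist: ContentFilter = (p) =>
  !["buy cheap pills"].some((term) => p.body.toLowerCase().includes(term));

// Opting out of filtering entirely is just the trivial filter.
const unfiltered: ContentFilter = () => true;

console.log(renderFeed(examplePosts, blocklist));  // hides bob's post
console.log(renderFeed(examplePosts, unfiltered)); // shows both posts
```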
It's not solely that it's pragmatically impossible - it's that IMO it's not a free speech value. The government should not be in the business of telling private entities, regardless of size, that they are obligated to republish speech they don't agree with. Doing so may be a very important social/cultural norm but it shouldn't be a legal one.
(I think you can carve out a reasonable exception for the fairness doctrine and the equal-time rule for broadcast radio/TV based on the fact that spectrum is limited, and even imperfect attempts by the government at ensuring fair allocation of spectrum are better than none. But the internet is not limited in the same way; anyone can start a discussion forum without a resource allocation from the government. For the same reason, newspapers and magazines don't have anything like the fairness doctrine and never did - for about as long as there have been newspapers, anyone could start a new, competing newspaper, so there was little need for a rule that everyone had the right to get their articles published in the local newspaper.)
Re a standardized API for keeping Section 230 - I still maintain that ideological neutrality is completely unrelated to Section 230, which is about who bears the liability for speech that's already illegal.
Proposals to tie Section 230 to ideological neutrality are about weaponizing the threat of users posting illegal speech in order to coerce websites into doing things. I think that's a lot worse as a matter of policy than directly telling the websites what to do, if that's your actual goal.
Here's a thought experiment: suppose you have a group of 1000 honorable people who would never post libel/threats/copyright infringement/whatever. If I run a web forum that's restricted to these people, nothing about Section 230 can impact me, because they're never going to do anything that will incur legal liability for themselves or me. If 500 of those people are pro-abortion-rights and 500 are anti-abortion-rights and I restrict the forum to one of those subsets, that doesn't change the analysis - I'm still not going to be affected.
The only way Section 230 becomes relevant is if a couple of those people are dishonorable (and also boneheaded) and want to post illegal speech. Then they incur liability for themselves, of course, but if I lose Section 230 protections and I fail to moderate their speech, I also incur liability.
But the ability of those people to post illegal speech on my forum is clearly not a public policy goal - their speech is already illegal. Sure, there will always be a few such people in the world, but the law has, until now, taken the position that people shouldn't do that. Adding a new law that relies on people continuing this illegal behavior in order for its incentives to work seems like a poor plan: it's a convoluted weapon, and one likely to work poorly in practice.
If you want to make a rule that large websites cannot operate at all unless they are content-neutral in some definition, do that instead of merely making them subject to legal risk. But then you have to figure out exactly how setting up those rules is compatible with the right of private entities to engage in free speech and association. (And I think having to figure that out is a good thing.)