It's not solely that it's pragmatically impossible - it's that IMO it's not a free speech value. The government should not be in the business of telling private entities, regardless of size, that they are obligated to republish speech they don't agree with. Doing so may be a very important social/cultural norm but it shouldn't be a legal one.
(I think you can carve out a reasonable exception for the fairness doctrine and the equal-time rule for broadcast radio/TV based on the fact that spectrum is limited, and even imperfect attempts by the government at ensuring fair allocation of spectrum are better than none. But the internet is not limited in the same way; anyone can start a discussion forum without a resource allocation from the government. For the same reason, newspapers and magazines don't have anything like the fairness doctrine and never did - for about as long as there have been newspapers, anyone could start a new, competing newspaper, so there was little need to make a rule that everyone had the right to get their articles published in the local newspaper.)
Re a standardized API for keeping Section 230 - I still maintain that ideological neutrality is completely unrelated to Section 230, which is about who bears the liability for speech that's already illegal.
Proposals to make Section 230 related to ideological neutrality are about weaponizing the threat of people making illegal speech to coerce websites to do things. I think that's a lot worse as a matter of policy than directly telling the websites what to do, if that's your actual goal.
Here's a thought experiment: suppose you have a group of 1000 honorable people who would never post libel/threats/copyright infringement/whatever. If I run a web forum that's restricted to these people, nothing about Section 230 can impact me, because they're never going to do anything that will incur legal liability for themselves or me. If 500 of those people are pro-abortion-rights and 500 are anti-abortion-rights and I restrict the forum to one of those subsets, that doesn't change the analysis - I'm still not going to be affected.
The only way Section 230 becomes relevant is if a couple of those people are dishonorable (and also boneheaded) and want to post illegal speech. Then they incur liability for themselves, of course, but if I lose Section 230 protections and I fail to moderate their speech, I also incur liability.
But the ability of those people to post illegal speech on my forum is clearly not a public policy goal - their speech is already illegal. Sure, there will always be a few such people in the world, but the law has, until now, taken the position that people shouldn't do that. Adding a new law whose incentive structure depends on people continuing this illegal behavior seems like a poor plan: it's a convoluted weapon, and one likely to work poorly in practice.
If you want to make a rule that large websites cannot operate at all unless they are content-neutral under some definition, do that instead of merely exposing them to legal risk. But then you have to figure out exactly how those rules are compatible with the right of private entities to engage in free speech and association. (And I think having to figure that out is a good thing.)