The problem with this narrow definition is that under it you don't have the right to create your own website: nobody is forced to sell you an IP address, a hostname, or bandwidth, and to the extent that anyone is forced to, it's because we've recognized that letting private companies dictate public discourse is a bad idea.
There's a fine line between not amplifying someone and silencing them. When the choices of a very few privately run websites determine who gets heard and who doesn't, we should be wary of them amplifying harmful speech and equally wary of them silencing speech harmfully.
I reject the notion that this is a narrow definition. In the US, it's the _standard_ definition that was widely accepted until the recent advent of the "but muh free speech" people on twitter and other social media.
I'm also quite skeptical of the "slippery slope"-style argument regarding IP connectivity. The number of available ISPs, web hosts, domain registrars, etc. is pretty large. There are even web hosts that have an explicit policy of letting you host _any_ content you want as long as it's not against the law in their jurisdiction[0]. And they'll even host it on a subdomain of theirs, if for some reason you have trouble getting a hostname for your hateful or crazy (but legally protected from censorship by the government) blog. And if laws are ever made restricting what those web hosts can do, then we're getting into the realm of the government restricting free speech, which is a different conversation entirely.
Regarding your second paragraph, the right to "get heard" is not a right guaranteed to anyone, at least not in the US. If you are spewing garbage, no one should be forced to hear it.
(caveat: I'm in the US, so my opinions are US-centric)
> I'm also quite skeptical of the "slippery slope"-style argument regarding IP connectivity. The number of available ISPs, web hosts, domain registrars, etc. is pretty large.
We've already seen the goalposts move when AWS and CDNs were dropping politically unpopular clients.
If it helps, we're already pretty close to the end of the slope. There's a very limited number of last-mile ISPs, so we're only one Twitter mob / protest away from Comcast / Verizon / Cox / AT&T holding press conferences about how they're blocking politically problematic domains and IP addresses. Then it'll only be tech-savvy users with VPNs that can access "free speech," at least until those become the target of the mob, too.
I disagree that we're close to the end of the slope, but I guess if you're right we'll find out soon enough.
I'm not a libertarian by any means but I do have some amount of faith that if what you're describing comes to pass, the free market will provide alternatives, if demand exists. VPNs are one such alternative.
I truly believe that it's harmful to society to guarantee free reach to everyone. It's kind of like the paradox of tolerance, if you've heard of that -- if private entities are barred from moderating content on their systems, the discourse will devolve more than it already has into conspiracy, hate, and other forms of unwanted content.
I don't think that the argument is that no platforms should be able to moderate. Moderation is a high-value activity that is hard to do well.
The argument is that moderation should be done by publishing companies, which face liability for their content. It should not be done poorly, en masse, by platform companies that do it at scale using automation and face no legal liability when they mess up.
The only exception I see to this is to allow community organized and run moderation for noncommercial communities.
> If publishing companies were liable for their content, wouldn't they censor more?
They would - and they are. That's why it's easier to publish fanfic on the internet than with an actual publisher. Tumblr is not on the hook for unauthorized titillating usage of copyrighted Disney characters, but HarperCollins would get sued into bankruptcy if they attempted the same. This is why the calls to repeal Section 230 are idiotic - they will lead to more "censorship".
Publishing companies are already liable for their content.
The argument is that, instead, these other, non-publishing, major communication platforms should be treated the way we have treated other major communication platforms in the past, such as the telephone network.
We have existing laws that could be extended to cover other communication platforms.
Telephone companies have been required to do certain things for decades, and the world hasn't collapsed because of it.
> In the US, it's the _standard_ definition that was widely accepted
citation needed
> There are even web hosts that have an explicit policy of letting you host _any_ content you want as long as it's not against the law in their jurisdiction
perhaps, but how much pressure do you think they could withstand if other private individuals & corporations pushed them to take down your content?
> the right to "get heard" is not a right guaranteed to anyone, at least not in the US. If you are spewing garbage, no one should be forced to hear it.
i don't think anyone is claiming that there is or should be a right to be heard. this is an issue of control over who can be heard, and the distinction is important.
for example, in a "free" twitter, i could post some racist tirade, and expect it to gain no traction/retweets from my followers & some random others. people might see it, but in principle it's no different from making the same speech in the city center: i'm going to be heard, but no one is going to listen.
we're all talking in extremes here too, which really isn't helping. yes, moderated platforms can remove racism, abusive content, etc., but they can (and do) also remove regular speech: more realistically, above, i would more likely have been tweeting about the lab-leak hypothesis of covid, back when any proposed cause other than the wet-market exposure hypothesis was being labeled as racist. do you think people that have been deplatformed/decried/cancelled for opinions like that were retroactively recognized as legitimate? if so, where's the profit motivation in that?
Uh, the US constitution and a few centuries of case law?
> perhaps, but how much pressure do you think they could withstand if other private individuals & corporations pushed them to take down your content?
The one example host I gave has been around 20 years. I trust that they have a good legal team and have faced mobs of angry people before, and that they'll continue to be around for a while longer.
One problem that I didn't bring up in my original post is that as soon as someone can be heard on twitter, their content is subject to algorithmic manipulation. So someone's fringe opinion could be broadcast to thousands or millions of eyeballs and made to seem like much more of a mainstream opinion than it actually is.
In a perfect world, where no manipulation is possible, I do agree with you that making sure people can be heard is the correct solution. But until or unless we get there, my opinion is that letting the platform have leeway to moderate content is the best path forward. If people tweeting about covid lab leaks (which I'm still not sure would be considered a non-fringe opinion in 2022) get caught up in that, then that sucks for them, but the alternative is worse. They are still free to set up their own site to discuss their theories.
> Uh, the US constitution and a few centuries of case law?
i'm sorry, i interpreted your point to broadly be "free speech with limits", which i understand to be at odds with the constitutional definition.
> The one example host I gave has been around 20 years.
no offense intended to them when i say i don't recall ever hearing about them in any setting, controversial or otherwise, which would lead me to believe they haven't experienced significant pressure to remove anything.
> [...] their content is subject to algorithmic manipulation
yes, i agree. "the algorithm" makes astroturfing much easier to perform and be effective.
> In a perfect world [...]
well then there's a bit of a bootstrapping problem here, no? principles like free speech were idealized and created in order to make the perfect world (for some values of "perfect"). hell, in the "perfect" world, free speech wouldn't even need protection. i don't think it's sensible to mandate a principle after the fact.
> which I'm still not sure would be considered a non-fringe opinion in 2022
there is no scientific consensus yet - surprise surprise - but we are at least now talking about it[1].
> i'm sorry, i interpreted your point to broadly be "free speech with limits", which i understand to be at odds with the constitutional definition.
Ugh, my main point is that the concept of "free speech" in the US is not relevant at all to the question of whether a private entity can remove you from their platform for saying something they don't like.
You can claim that in the colloquial sense, the phrase is used more liberally to mean any censorship whatsoever, and that may be true. But in my opinion that is conflating two concepts, and those doing so are either 1) confused or 2) being deliberately dishonest by trying to smuggle some sense of constitutional/government mandate into the conversation.
> Ugh, my main point is that the concept of "free speech" in the US is not relevant at all to the question of whether a private entity can remove you from their platform for saying something they don't like.
and i'm saying that you appear to treat social media and digital identity as some superfluous luxury that can be revoked without consequence from an individual as punishment (or for more dubious reasons), much like how republicans view health care and social welfare: with the notable exception of the US and one or two others, every country recognizes that private corporations must provide health care (or the means to it - i'm talking about the equipment, education, etc. being provided largely by non-governmental institutions), and that citizens must be allowed access to it regardless of who they are and what they've said, or even done.
yes, you are correct that the constitution does not prevent private corporations from removing content that they do not like - that is the point here. there is no question over the legality of such removals, and i don't think anyone here has tried to raise one.
i'm not trying to smuggle constitutional/government mandate, i'm explicitly trying to discuss the notion of whether or not it should exist.
> the concept of "free speech" in the US is not relevant at all to the question of whether a private entity can remove you from their platform for saying something they don't like.
You seem to be confusing "free speech", the concept, with "free speech", the legal right.
Most codifications of the legal right limit themselves to protecting against certain types of government interference.
However, the concept itself is absolutely not limited to contexts where censorship is imposed by a government. To try to impose this limitation is 1984-style thought policing that tries to remove existing language to control what can be said.
I'm not talking about some colloquial meaning, but the core meaning of the concept of "freedom of speech".
I think we are actually saying the same thing, and my language was imprecise, so I apologize.
My point is that in the US, the narrower legal/constitutional concept of free speech is often implied, inadvertently or deliberately, when people are actually only referring to it in the broader sense that you describe. For example, a banned Twitter user might say things like "Twitter is a disgrace to democracy"[0] which confuses others into thinking there is some constitutional or legal harm being done when in fact there is none.
I have no problem with having a debate about whether the core concept of free speech is a universal right that should be guaranteed everywhere (surprise: I don't think it should be a universal right and I think it's downright dangerous to society to force all private entities to respect it). But I see the two meanings get confused so much that I felt a need to call it out.
> and I think it's downright dangerous to society to force all private entities to respect it.
I would wholeheartedly agree with that, but only because you added "all".
I think there is a balance to strike in placing limits on corporate freedom of association, just as there is in placing limits on free speech.
I do think that free speech is valuable enough that we should carefully consider placing restrictions on how and why large, oligopolistic corporations can exercise their right to freedom of association.
I think a lot of this can be solved with a "user's bill of rights" that protects users from arbitrary and capricious enforcement of nebulous terms by service providers.
I think most of the rest of this would be ideally solved by narrowing or eliminating the types of moderation a corporation can engage in while maintaining liability protection under section 230. Possibly with language giving special exemptions to community run moderation.
> for example, in a "free" twitter, i could post some racist tirade, and expect it to gain no traction/retweets from my followers & some random others.
Nah, you'd get a whole bunch of followers who are happy to see someone say the quiet parts out loud and would retweet.
Without social media, overt racists have to meet in private and the effects are local. With social media, they get a microphone that reaches the world and it spreads worldwide.
Isn't it interesting that many otherwise intelligent people don't seem to understand this? I don't know _how_ we solve this problem, but the sheer number of people who refuse to see this as _a_ problem (and who should know better) is frankly astonishing and scary.
Just as there are people that think that everything is racist, there are people that think that nothing is racist.
I saw an interview where even infamous neo-Nazi Richard Spencer claimed to not be a racist, he just doesn't think races should intermingle, and that white neighborhoods should stay white.
> for example, in a "free" twitter, i could post some racist tirade, and expect it to gain no traction/retweets from my followers & some random others. people might see it, but in principle it's no different from making the same speech in the city center: i'm going to be heard, but no one is going to listen.
This is the biggest misconception many proponents of absolute free speech have. Many movements, both good and bad, have begun on social media (Twitter / Facebook, etc.). Everything from the Arab Spring to BLM to Unite the Right in Charlottesville and January 6th was at least in part organized on social media. Just because _you_ don't think an idea is worth listening to does NOT mean that there aren't hundreds or thousands of people who disagree and will listen. I don't know what the solution to this problem is, but to assume it's not real is just plain wrong.
I think they're saying that private companies aren't forced to actually rent you a server, which they aren't, but that's simply those companies' own free speech at work, allowing them to choose who to associate with. The same freedom to not rent out servers to some racist/twitter-cancelled person gives them the freedom to not associate with people who want to host vile and disgusting porn on their servers.