Hacker News

Parler was never a "free speech" platform. This was always a lie; they heavily moderated their content. I've always felt that Parler was just a lazy attempt to make some cash by pulling some users away from Twitter by claiming they could do what they wanted; it was never more or less than that.

I'm interested in knowing how you think a real "free speech" platform can actually work, however. We have message boards that do this and they are just toxic cesspools. The idea of an online "public square" that isn't that sounds impossible. How many public squares are reaching millions of people instantly?




Whenever the discussion about Parler being a “free speech” platform comes up, I feel compelled to point out that they banned the DevinNunesCow parody account.

Parler was more than happy to moderate speech, even political speech, as is their right as the platform creator. Their breathless claims about being free speech absolutists are absolute nonsense.


I think Parler's angle was to become the home for popular conservative commentators. Parler continually suggested Hannity, Levin, D'Souza, etc as accounts to follow, even if you blocked those accounts.


> I'm interested in knowing how you think a real "free speech" platform can actually work, however. We have message boards that do this and they are just toxic cesspools. The idea of an online "public square" that isn't that sounds impossible. How many public squares are reaching millions of people instantly?

I'd like to know the answer to this, too. I wonder if the reason why they turn into toxic cesspools is precisely because the only people who use free speech platforms are the people who were kicked off the others.

If you accept that this is plausible, then is it feasible that more reasonable folks who just want to talk about politics in a less divisive manner (or maybe not politics at all!) might help bring down the temperature if everyone were swimming in the same pool, instead of a few extreme viewpoints being forced into the same swamp? (NIMBY!)

This would also be in keeping with that classically liberal axiom, "The remedy is more speech, not enforced silence." (Supreme Court Justice Louis Brandeis, writing in Whitney v. California, 1927; he also co-authored the 1890 article "The Right to Privacy": https://en.wikipedia.org/wiki/Louis_Brandeis)


> I wonder if the reason why they turn into toxic cesspools is precisely because the only people who use free speech platforms are the people who were kicked off the others

You don't have to wonder, we've seen this time and time again with virtually every open community. Without tireless moderation, the swamp grows.

In other words, the majority of people who use free speech platforms have already answered your question: they've shown themselves unable to co-exist with (a much larger number of) reasonable folks and were kicked out.


You're not actually answering the question.

Suppose that the dominant platforms (e.g. Facebook) are not free speech platforms. They boot off a hundred thousand people. 10% of them actually deserved it and are militant jackasses who ruin everything.

Now someone else creates a "free speech" platform. Everybody is allowed in. Well, 80% of the initial users are going to be a subset of the ones who got kicked off of the incumbent platform, and 10% of those are jackasses, so your platform is now 8% jackasses. That's a huge percentage and it's going to immediately turn into a dumpster fire because the jackasses will drive out ordinary people and become an even larger percentage. There are plenty of instances of this happening, e.g. Voat.

But suppose you go the other way. Somehow get a large number of ordinary users. Now the jackasses are only 0.5% of the users. Combine this with something like a voting system so that nothing is ever actually removed, but spam and fascism end up at the bottom of the feed where nobody sees them by accident.

That has the potential to work. The key is to somehow get enough users to dilute the jackasses before they take over, e.g. because the incumbents overreached and a large number of non-jackasses are moving in protest.
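To make the voting idea concrete, here's a minimal sketch (the `Post` and `ranked` names are hypothetical, not any real platform's API): posts are never deleted, they are just sorted by community score, so a heavily downvoted post sinks to the bottom of the feed but remains retrievable.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    up: int = 0
    down: int = 0

    @property
    def score(self) -> int:
        return self.up - self.down

def ranked(posts):
    # Sort by community score; nothing is removed, low-scoring
    # posts simply sink to the bottom of the feed.
    return sorted(posts, key=lambda p: p.score, reverse=True)

posts = [Post("thoughtful take", up=40, down=2),
         Post("spam", up=1, down=30),
         Post("ordinary comment", up=10, down=3)]

feed = ranked(posts)
# All three posts are still present; the spam is just last.
```

The design choice is that the community's votes determine prioritization, not availability: no central authority ever makes content disappear.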


I did answer, because the internet started off as your proposed experiment. It didn't work: they didn't get diluted, they just got louder, circumvented rules, harassed people, and escalated. They aren't accidentally toxic; they are actively, aggressively toxic.

Moderation didn't come before toxicity; it came in response to it. Therefore, moderation doesn't cause or focus toxicity.

If you want to address this, you need to look at education.


And moderation came _very quickly_. Usenet started seeing significant use in 1983. The first moderated Usenet group was created in 1984 (insert mandatory weak Orwell joke). And Usenet was eventually largely replaced by very heavily moderated webforums, and then by things like reddit where the popular subreddits that people actually want to use are mostly fairly heavily moderated.

It turns out that people don't, as a general rule, actually enjoy using totally unmoderated fora; they tend to quickly fill with spam and awful stuff.


> I did answer, because the internet started off as your proposed experiment. It didn't work.

It worked great for multiple decades until "social media" applied algorithms that promoted controversy (i.e. anger-inducing hyperbole and conspiracy theories) to increase "engagement" and sell more ads.

That you can find an ASCII swastika or goatse on Slashdot which is instantly downvoted to -1 (but not actually removed from the site) was never a real problem. That Facebook put QAnon at the top of your mom's feed was a major problem.

But then we get calls for censorship as a response to a problem created by bad moderation.

Notice that there is a difference between voting (where the community determines prioritization in a decentralized way but nothing is rendered unavailable) and censorship (where some fallible central authority is deciding what people are not allowed to know).


That might work. Except these platforms are about getting you to spend more time on them. Thus they threw in some AI to decide what to show people.

Turns out outraging people increases engagement so the “jackasses” get amplified.


Isn't Reddit pretty much proof that, at least if you allow users to self-select into groups, this doesn't work?

Arguably Facebook is too.


Isn't Reddit significantly less toxic than Twitter and Facebook?


Precisely - the CEO literally hung out on Discord servers hunting for people to ban on Parler.


I hadn’t heard this one, and can’t find anything about this - can you give some links/pointers to this story so I can learn more about this please?



Perhaps Parler wasn't a true free speech platform, i.e. one that moderated only the legal bare minimum. But even if it's true that it was friendly to only one end of the political spectrum, at least it was a competitor to Twitter. The elimination of the sole significant direct competitor to Twitter, even one that represents a subset of views and not all speech, is the problem. It means there's a tech oligopoly.

I’d love to see a decentralized free speech platform. I’m no blockchain evangelist, but it would seem like such technology could be used to build a real online “public square” as you say.


Check out Scuttlebutt [0].

[0] https://scuttlebutt.nz


> Parler was never a "free speech" platform. This was always a lie; they heavily moderated their content.

Exactly. Alt-left speech was not welcome on Parler - only alt-right speech.


Alt-left isn't a thing in the US; perhaps you're falling for a false dichotomy. Alt-right is a euphemism for fascism/white supremacism.


I'm not going to split hairs about the actual positions of radical leftists, because it's not really relevant to this conversation.

I will, however, point out that much of the alt-right is terrified of the antifa boogeyman... and that Parler, the platform of free speech, was quite happy to ban anything that smelled of antifa, as well as much that could be described as moderate leftism.

Its tolerance for speech only extends to a very narrow slice of the political spectrum.


Yes, but the notion of an existent, burgeoning "alt-left" is fully attributable to folks being divorced from reality as a consequence of consuming lying media sources (especially those promoted on Parler). Also, "antifa" isn't an organization, so classifying what white nationalists call "antifa" as boogeymen is surprisingly apt.

What gets blocked on Parler is more appropriately called "not supportive of white nationalism." That way you are sure to capture the entire space of content, whether alt-right or not, instead of narrowing the universe to the alt-right and things the alt-right is enculturated to fear but that don't appear in reality.


I think you mean European/UK centrists, like that nice Lib Dem Alexandria Ocasio-Cortez :-) and that One Nation Tory Mr Obama.


The correct antonym to Alt-Right is "Ctrl-Left"


And anyone in the middle should be called a spacebar.


Well, a real free speech platform should apply strict, automatic moderation rules to political threads and basically tease out the best arguments from all sides.


Maybe we should move away from free speech and toward pro speech. Sites that encourage healthy discussion are what we really want. Anything that discourages that should be reduced in importance; anything that encourages it, promoted.

Patterns of negative conversation, defined by fewer quality replies, get negative points, and vice versa for positive patterns to be encouraged.

As things evolve and how people respond changes these patterns can change.


That is not free speech. That could be called something like "quality speech".

With free speech, as defined now, low quality arguments hit the same bar as high quality ones.

Yes we should strive to emphasise high quality arguments (even though I fear this is a near impossible task programmatically) but this is not what free speech means.

Tldr: according to free speech the dummies and assholes deserve to be heard.


I think HN strives to achieve this (as you point out, very high bar) and does a pretty good job with it overall. I do believe that dang's and the moderator teams' light touch has really aided a free-ranging discussion among people of wildly divergent viewpoints, and I for one appreciate this dynamic.

That we're even having such a high-quality discussion now really speaks volumes about the seemingly effortless way in which the HN team has made this a great place to have such a conversation or debate. To be honest, I do not think that HN is always fair to everyone, but it does seem like the moderators try to be.

With that said: Twitter and FB have wildly missed the mark.


I think any mildly political topic disproves your point.

Technological discussions mostly stay high quality but any politics(-adjacent) topic turns into an obvious struggle of moderation and quality.

The same goes for Twitter but the balance of topics is heavily skewed towards the latter.


I mostly agree, in fact there are certain topics that I will absolutely ignore and block as experience shows there is no room for intellectual discussion about them on here.

That being said, I do agree with GP on this point:

> To be honest, I do not think that HN is always fair to everyone, but it does seem like the moderators do try to be


[flagged]


> You’re just spreading what you’ve read in liberal news about Parler, without having used the app yourself.

You have no way of knowing if this is a true statement. He never said whether he used the app. Maybe he's speaking from personal experience with the app. You should focus on debating the argument and not introduce assumption as fact.


> Because the reality is, if the users didn’t like the post or comment, they simply downvoted it until it was buried.

Exactly. And the problem was the users did like it. Lin Wood called for the VP's execution just days before a real mob stormed the Capitol chanting "Hang Mike Pence!". And it was hugely popular and shared everywhere. And that's a problem that needs to be fixed, not celebrated. Major thought leaders across the platform were fanning the flames, not engaging in moderation.

Maybe it's true that Parler had a scheme for moderation. But it was objectively a complete failure.


> I'm interested in knowing how you think a real "free speech" platform can actually work, however.

Let me give this a shot. There are two types of content that need to be removed: (1) spammy content that readers themselves don't want to see, and (2) illegal content that society doesn't want anybody to see. I think these two need to be addressed separately.

Illegal content includes copyright infringement, violations of NDAs, libel, slander, perverting the course of justice (violations of court orders), incitements to violence, conspiracy to commit a crime, exposing troop movements, and in general anything that directly causes damages which could be assessed and recovered in a court of law. Racism, sexism, homophobia, "hate speech", advocacy of violence, falsehoods, trickery, promoting very dangerous ideas, lying about (or being incorrect about) vaccines, lying (or being incorrect) about who won an election, offensive speech, and jokes of all kinds are, at least in America, all forms of protected speech.

Illegal content must be taken down when a court orders it. AFAIK the platform isn't obligated to take anything down unless ordered by a court to do so, but I'm probably just ignorant -- PLEASE don't take legal advice from some random armchair pontificator on the internet like myself. Section 230(c)(1) protects the platform, as they are not deemed to be the speaker.

Most people are terrible judges of whether something is illegal or not. For example, many congresspeople think Trump's Jan 6 speech was illegal (it wasn't) because it incited violence (it didn't). It didn't even advocate for violence. The idea that it was illegal is so far beyond Brandenburg that it would be laughable if it weren't so serious... but back to the point.

Attempts to proactively take down illegal content will invariably take down some legal content, probably lots of legal content, which is why a true "ideal" free speech platform would not attempt to do this. One must have humility and recognize one's near utter inability to determine what is and is not illegal to any reasonable degree of accuracy during times of political strife like we find ourselves in today. The popular opinion right now is that all of us should enact justice upon each other according to our own personal interpretation of justice... which is just nuts when you think about it. Let's leave law enforcement up to the law enforcement professionals.

The first issue, the spammy content, would then be handled the same way we handle email spam. Plug into the spam filtering service of your choice, subscribe to rulesets, or write your own. I don't know of any platform that lets you plug in your own moderation, and that's where I think they've all gone wrong: they either get flooded with spam or antisemites and become nearly useless to nearly everybody, or else some moderation team thinks it is its role to filter spam on behalf of the community, and the community gets pissy that it filtered the wrong things. These two extremes are both wrong. Users can filter their own content if given the right tools. BTW, Section 230(d) requires that providers notify their customers that parental control protections are commercially available... it puts the onus on the users to solve this for themselves, even way back when that law was written.
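To sketch what "plug in your own moderation" might look like (all names here are hypothetical, not any real service's API): the platform stores everything, and each user subscribes to whatever rulesets they want, exactly like a mail client plugging in a spam filter.

```python
def keyword_ruleset(banned_phrases):
    # Build a simple ruleset from phrases the subscriber wants hidden.
    banned = [p.lower() for p in banned_phrases]
    def rule(post: str) -> bool:
        # True means "hide this post for *me*"; the post itself
        # stays on the platform for everyone else.
        return any(p in post.lower() for p in banned)
    return rule

class UserFeed:
    def __init__(self):
        self.rules = []  # rulesets this particular user subscribed to

    def subscribe(self, rule):
        self.rules.append(rule)

    def view(self, posts):
        # Filtering happens per user at read time, not at the platform.
        return [p for p in posts if not any(r(p) for r in self.rules)]

posts = ["great article about compilers",
         "BUY CHEAP PILLS NOW",
         "hot take on moderation"]

feed = UserFeed()
feed.subscribe(keyword_ruleset(["cheap pills"]))
# feed.view(posts) hides only the spam, and only for this user
```

The point of the design is that no central moderation team decides for the whole community; two users with different subscriptions see different feeds over the same underlying data.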

I'm going to add a third issue, flooding. That has a content neutral solution: throttling.
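A token bucket is one standard way to implement content-neutral throttling; here's a rough sketch with illustrative names. It caps how fast anyone can post without ever inspecting what they post.

```python
class TokenBucket:
    # Content-neutral rate limiter: one bucket per user.
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity       # start full
        self.refill = refill_per_sec
        self.last = 0.0              # timestamp of the last check

    def allow(self, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at capacity,
        # then spend one token if available.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=0.5)
# A flood of 10 posts at t=0: only the first 3 get through;
# the rest must wait for the bucket to refill.
results = [bucket.allow(0.0) for _ in range(10)]
```

Because the limiter looks only at timestamps, it throttles the flooder and the thoughtful poster by exactly the same rule, which is what makes it content neutral.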


I assume you're getting downvoted at least in part because people disagree with you (I didn't downvote FWIW), but you're also factually wrong about Section 230 protections. Section 230 doesn't give blanket immunity to publishing illegal content: it only protects against civil violations, not criminal.


> Section 230 doesn't give blanket immunity to publishing illegal content: it only protects against civil violations, not criminal.

Of course you are correct, but that doesn't make me factually wrong. You are talking about publishing illegal content. But (c)(1) "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

And I missed DMCA takedowns.


How was it a lie? You didn’t have an account there. And you’re very mistaken on the “make some cash” theory.





