> I wonder if the reason why they turn into toxic cesspools is precisely because the only people who use free speech platforms are the people who were kicked off the others
You don't have to wonder, we've seen this time and time again with virtually every open community. Without tireless moderation, the swamp grows.
In other words, the majority of people who use free speech platforms have already answered your question: they've shown themselves unable to co-exist with (a much larger number of) reasonable folks and were kicked out.
Suppose that the dominant platforms (e.g. Facebook) are not free speech platforms. They boot off a hundred thousand people. 10% of them actually deserved it and are militant jackasses who ruin everything.
Now someone else creates a "free speech" platform. Everybody is allowed in. Well, 80% of the initial users are going to come from the pool that got kicked off the incumbent platform, and 10% of those are jackasses, so your platform is now 8% jackasses. That's a huge percentage, and it's going to immediately turn into a dumpster fire because the jackasses will drive out ordinary people and become an even larger percentage. There are plenty of instances of this happening, e.g. Voat.
But suppose you go the other way. Somehow get a large number of ordinary users. Now the jackasses are only 0.5% of the users. Combine this with something like a voting system so that nothing is ever actually removed, but spam and fascism end up at the bottom of the feed where nobody sees them by accident.
That has the potential to work. The key is to somehow get enough users to dilute the jackasses before they take over, e.g. because the incumbents overreached and a large number of non-jackasses are moving in protest.
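To make that arithmetic concrete, here's a back-of-the-envelope sketch using the hypothetical numbers above (100k booted, 10% of them deserving it). The `jackass_share` helper is purely illustrative, not a model of any real platform:

```python
# Back-of-the-envelope dilution math using the hypothetical numbers above.

booted = 100_000     # users kicked off the incumbent platform (hypothetical)
jackass_rate = 0.10  # fraction of the booted who actually deserved it

def jackass_share(total_users: int) -> float:
    """Fraction of a new platform made up of jackasses, assuming every
    booted jackass joins and the rest of the user base is ordinary."""
    return (booted * jackass_rate) / total_users

# Scenario 1: platform seeded mostly by the booted (80% of 125k users).
print(f"{jackass_share(125_000):.1%}")    # 8.0% -- dumpster fire territory

# Scenario 2: same 10k jackasses diluted into a 2-million-user platform.
print(f"{jackass_share(2_000_000):.1%}")  # 0.5% -- survivable with voting
```

The point of the sketch is that the absolute number of jackasses is the same in both scenarios; only the denominator changes.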
I did answer, because the internet started off as your proposed experiment. It didn't work. They didn't get diluted; they just got louder, circumvented restrictions, harassed people, and escalated. They aren't accidentally toxic, they are actively and aggressively toxic.
Moderation didn't come before toxicity; it came in response to it. Therefore, moderation doesn't cause or focus toxicity.
If you want to address this, you need to look at education.
And moderation came _very quickly_. Usenet started seeing significant use in 1983. The first moderated Usenet group was created in 1984 (insert mandatory weak Orwell joke). And Usenet was eventually largely replaced by very heavily moderated webforums, and then by things like reddit where the popular subreddits that people actually want to use are mostly fairly heavily moderated.
It turns out that people don't, as a general rule, actually enjoy using totally unmoderated fora; they tend to quickly fill with spam and awful stuff.
> I did answer, because the internet started off as your proposed experiment. It didn't work.
It worked great for multiple decades until "social media" applied algorithms that promoted controversy (i.e. anger-inducing hyperbole and conspiracy theories) to increase "engagement" and sell more ads.
That you can find an ASCII swastika or goatse on Slashdot, instantly downvoted to -1 (but not actually removed from the site), was never a real problem. That Facebook put QAnon at the top of your mom's feed was a major problem.
But then we get calls for censorship as a response to a problem created by bad moderation.
Notice that there is a difference between voting (where the community determines prioritization in a decentralized way but nothing is rendered unavailable) and censorship (where some fallible central authority is deciding what people are not allowed to know).
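A minimal sketch of that difference, with a made-up `Post` type and scores purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    score: int  # net community votes

posts = [
    Post("useful answer", 42),
    Post("obvious spam", -50),
    Post("mediocre hot take", 3),
]

# Voting: everything remains available; the community's votes only
# determine the order in which posts appear.
ranked = sorted(posts, key=lambda p: p.score, reverse=True)
print([p.text for p in ranked])  # spam is last, but still retrievable

# Censorship: a central authority removes posts outright, so readers
# can no longer see them at all, even deliberately.
censored_feed = [p for p in posts if p.score > 0]
print([p.text for p in censored_feed])  # spam has ceased to exist
```

In the first case a determined reader can still scroll to the bottom and see what was buried; in the second, the decision about what you're allowed to know has already been made for you.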