"Humans are mammals, so they are mostly either male or female" misinformation or opinion?
"Israel is an apartheid state, even one of its former ministers said this" misinformation or opinion?
"Government-funded groups like Hasbara Fellowships and CAMERA make up an operation to spread propaganda and influence our elections" misinformation or opinion?
And who should decide one or the other? If you have the names of any experts we should appoint at Twitter, Facebook etc, it would be great to know.
My point is that Twitter or Facebook or whoever can exercise their own free speech to not propagate what you say, whether the stated reason is “misinformation” or “contains the letter X”.
I agree, but I think they should lose legal protection for the content on their platform, because they've demonstrated the ability to moderate it. They should be legally liable for the spread of hate speech, financial scams, etc.
Lost money on a bitcoin scam on Twitter? They exercised their free speech to display that scam, and should be liable for it.
Antisemitism or other hate speech? Guess who chose to publish it: Twitter.
Problem solved. Be a publisher or be a platform. Be both, and eat the liability nobody else has special exceptions for.
But either way, that's all a distraction from the original point: the line between misinformation and honest mistake, or between misinformation and truth, is arbitrary nonsense drawn by politically active groups pushing their agendas, not anything related to truth.
Regarding the tangent, this sort of argument is exactly why Section 230 of the Communications Decency Act was enacted. Court cases established that if a forum moderated its content (removing profanity, hate speech, etc.), it opened itself to full civil liability[0], while if it moderated absolutely nothing, it wasn't liable for the content its users posted[1]. This meant that any company that wanted to moderate things like profanity, pornography, or hate speech had to screen _everything_ by human review to avoid taking on undue civil liability. Congress didn't want this to be the internet of the future, so it passed Section 230 to let any service provider moderate content under its own rules without becoming directly liable for every tort its users commit on the service.
Safe to say that making them choose either extreme will lead to the eradication of social media as we know it, as there's no way Twitter or Facebook would let people post if doing so would require them to 100x their legal team to deal with all of the new lawsuits they're directly liable for.
>Safe to say that making them choose either extreme will lead to the eradication of social media as we know it
This is by far the best possible result. I don't care about social media; I care that some companies have been handed a special dispensation from liability that no other publisher gets, strictly to create a service that isn't necessary, even though they've already demonstrated they can arbitrarily remove any content they want.
Let them close. The people who care about social media as it exists today are advertisers looking to build profiles on people who have no idea how much information they're leaking through the apps often pre-installed on their phones.
Let them choose to be publisher or platform, and reap the rewards and consequences their choice brings, instead of creating this special class of companies who control speech but are immune from lawsuits. They and the legal position they've been given are a cancer on society.
"Israel is an apartheid state, even one of its former ministers said this" misinformation or opinion?
"Government-funded groups like Hasbara Fellowships and CAMERA make up an operation to spread propaganda and influence our elections" misinformation or opinion?
And who should decide one or the other? If you have the names of any experts we should appoint at Twitter, Facebook etc, it would be great to know.