
You can flag this comment away, but I'll say this anyway:

If America doesn't crack down on Google's, Facebook's and Twitter's ability to "moderate" (censor) content at their discretion (today!), Western democratic society will come to an end. Sooner rather than later.

These services are utilities, sometimes monopolies, and must be regulated as such.



I agree protecting opinions I personally might not agree with is important.

However, I see no value at all in protecting calls to violence or protecting speech that is intentionally designed to deceive or manipulate.

Additionally, attributing and contextualising information must be allowed.

Finally, many scientific matters are settled (such as that climate change exists, that there was systemic discrimination of PoCs even long after abolishment of slavery and that Earth is not flat).

Stating that those debates are settled and that trying to "reopen" them does not do any good must be allowed.


Who decides what's designed to deceive?

The right thinks the left is trying to manipulate people and the left thinks the right is trying to manipulate people.

Also, why wouldn't I be able to argue against climate change? I know it's real, but that's beside the point I'm trying to make. Why should I be stripped of that right? To me, it sounds like the times when you couldn't speak about religion or voice any kind of doubt. Not only is it bad because it's pure censorship, but it also creates a precedent for banning speech against whatever some people think is settled.


"When the looting starts, the shooting starts" isn't deceptive. It's clearly a call to violence. It is 100% unambiguous.


It's in poor taste, but it's not your place to say that it has no value. That post has a lot of political meaning and promotes a strong police response to non-peaceful protests. People are just angry about the phrasing. This is far from having no value, imo.


"The phrasing" is a historical quote that is connotated with calls to violence since the original social justice movement.

https://en.m.wikipedia.org/wiki/When_the_looting_starts,_the...


What are you implying, that it has no value because it advocates violence?

EDIT: To be precise about why I'd disagree with that: promoting self-defense advocates violence, but it still has value to pretty much everyone. This is controversial because of the current climate, but saying it has no value is false.


Then report it to the authorities for investigation and prosecution.


Which are supposed to do what? Exercise censorship? Prosecute Trump?


It's a call for violence in the face of violence, which is exactly what used to be the obvious role of police up until this marvellous new age bestowed on us by 2020.


Are you sure? Because when Trump said that the last thing I thought was he was calling for violence. To me he was stating a fact. And as we've seen, people rioted, looted and some ended up using guns. Happened in 92 also. Remember the "roof Koreans"?


>However, I see no value at all in protecting calls to violence or protecting speech that is intentionally designed to deceive or manipulate.

Congratulations - you just allowed the South to secede and defanged every rousing political speech ever. You also prevented the USA's entrance into WWI (the media played a huge role there).

By your definition in this day and age a leader cannot use social media to gather support for a war.


> By your definition in this day and age a leader cannot use social media to gather support for a war.

Yup.


>Finally, many scientific matters are settled (such as that climate change exists, that there was systemic discrimination of PoCs even long after abolishment of slavery and that Earth is not flat).

And who is it that decides when a scientific matter is "settled"? You? Trump? Zuckerberg? Bartolomeo Spina [0]?

>Stating that those debates are settled and that trying to "reopen" them does not do any good must be allowed.

With this single sentence you have highlighted the entire problem with this approach. "It is a settled scientific matter that Earth is the center of the universe and trying to reopen this debate will not be tolerated" is stunningly similar to what Copernicus was told.

0: https://en.wikipedia.org/wiki/Nicolaus_Copernicus#Controvers...


I don't believe that moderating away content and discussion of climate denial would actually result in climate change deniers changing their views. They'd simply disengage from the social networks that institute such moderation and instead seek out alternative mediums. There are plenty of other venues online and offline for climate change denial and other questionable content.

This could even create a business opportunity for new social networks with different moderation policies. That could backfire and create even stronger echo chambers. Imagine a social network that prohibits content in support of the pro-choice position on abortion. The site administrators could simply argue that the issue has been settled and pro-life is the only moral stance.


> I don't believe that moderating away content and discussion of climate denial would actually result in climate change deniers changing their views. They'd simply disengage from the social networks that institute such moderation and instead seek out alternative mediums. There are plenty of other venues online and offline for climate change denial and other questionable content.

One key effect here, too: even if being among other people doesn't turn them away from climate change denial, it can better ground them in reality in other areas.


Can you give examples of some kinds of opinions you're in favor of protecting? If you're excluding speech that you find intentionally designed to manipulate and speech that contradicts science you believe is settled, I'm not sure what's left beyond favorite ice cream flavors.


Science that 90+% of scientists believe is settled.

I mean, if you've got new empirical data to show that the overwhelming evidence is wrong and Earth is in fact flat, please do submit a paper and get some feedback from the scientific community.

What I believe is harmful is to present worldwide scientific consensus on one hand and granny's gut feeling on the other hand as equally valid opinions - and then conclude that a topic is "disputed". This happens a lot in social media.


> However, I see no value at all in protecting calls to violence or protecting speech that is intentionally designed to deceive or manipulate.

Like all ads, 90% of selfies etc?


> Like all ads

Yup.

> 90% of selfies etc?

Dammit, you got me there! Alright, I'll make an exception for selfies.


I think it's going to be a much harder line to draw than people assume. We already trust the big email providers (Microsoft, Google) to perform blanket spam filtering. Part of what makes that work is that it's non-transparent; it works because it's a black box that noise generators can't inspect. Where is the line between spam and valid speech? Individually, we have ceded control of filtering to the giants, because anything else is overwhelming. I am not sure I would be able to use any of these products if they didn't take a "block first, release from the filter later" approach.


There is a big difference there, in that I have control over that spam filtering: I can allow or disallow individual senders, etc.

So if I want to send content to 100 people, and all 100 people add me to an approved list, my content will get through.

That is not how these social media systems work


I agree, it's not fully analogous. But it would be easy to write a law that has unintended consequences.

Part of the problem with FB, Twitter, and YouTube is that they are closed, non-interoperable systems, a problem that doesn't hamper email.

Things like real-name policies also suppress speech. If a law were in place preventing Facebook censorship, it would also need to prevent deplatforming and Facebook deciding who can and can't use its platform. Blocking speech they don't agree with is different from blocking all speech from a speaker who sometimes says things Facebook doesn't like.

There is also the issue of the newsfeed ranking algorithm itself. Is deranking something more than a thousand posts deep the same as censorship? They can basically control the spread of information by allowing people to post it, but not allowing it to rise up the ranks. If that ranking exerted preference or bias and applied to everyone's feeds and all their groups, Facebook could control the conversation without traditional censorship. At the same time, algorithmic news sorting basically IS the product, and that's what people go to it for.


> We already trust the big email providers (Microsoft, Google) to perform blanket spam filtering.

And that's the whole issue. Spam filtering should not be irreversibly bundled with the provision of an e-mail account. It could be a separate service that you opt into at your pleasure, or you could choose a different supplier to do your spam filtering. It could even be a plugin in your web browser, automating the now-manual "mark as spam" and "undo mark as spam" functions.
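
As a rough sketch of that decoupling (purely illustrative: the host, credentials, folder name, and keyword rules below are made-up placeholders, not anything current providers actually offer), here is what a standalone, user-controlled filter over plain IMAP could look like:

    # Hypothetical client-side spam filter: the rules live with the user,
    # and the e-mail provider is just a dumb mailbox reachable over IMAP.
    import email
    import imaplib
    from email.header import decode_header

    IMAP_HOST = "imap.example.com"   # placeholder provider
    USER = "me@example.com"          # placeholder account
    PASSWORD = "app-password"        # placeholder credential
    SPAM_FOLDER = "Spam"             # assumed to already exist
    BLOCKED_KEYWORDS = {"win a prize", "crypto giveaway"}  # user-maintained rules

    def subject_of(raw_bytes):
        """Decode the Subject header of a raw RFC 822 message."""
        msg = email.message_from_bytes(raw_bytes)
        parts = decode_header(msg.get("Subject", ""))
        return "".join(
            p.decode(enc or "utf-8", errors="replace") if isinstance(p, bytes) else p
            for p, enc in parts
        )

    def filter_inbox():
        with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
            imap.login(USER, PASSWORD)
            imap.select("INBOX")
            _, data = imap.search(None, "UNSEEN")  # only look at new mail
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                subject = subject_of(msg_data[0][1]).lower()
                if any(kw in subject for kw in BLOCKED_KEYWORDS):
                    # Move to the user's own spam folder: copy, then mark deleted.
                    imap.copy(num, SPAM_FOLDER)
                    imap.store(num, "+FLAGS", "\\Deleted")
            imap.expunge()

    if __name__ == "__main__":
        filter_inbox()

The point isn't the keyword matching (a real filter would do better); it's that the rules and the "undo" live entirely on the user's side, so switching filtering suppliers doesn't mean switching mailboxes.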


> You can flag this comment away

Why would this comment be flagged?

It's a reasonable statement: Twitter handles have replaced phone numbers in the age of online discourse. Telling someone to join a fringe social network is like saying we can ban people we don't like from using the telephone because ham radio exists.


You forgot the part where people dox you for being a ham radio operator and try to get you fired from work because only right-wing extremists use ham radio.


Did this happen? A person who is a ham radio operator being doxxed because they were a ham radio operator? Or was it a person using ham radio to promote extremist viewpoints getting doxxed?


I haven't checked if it's happened before, but doxxing is trivial. You can look up anyone's name and address using their callsign on the FCC website:

https://wireless2.fcc.gov/UlsApp/UlsSearch/searchLicense.jsp


[flagged]


Obviously we ban accounts that post things like "you turnip", so please don't.

https://news.ycombinator.com/newsguidelines.html


Are the last two words in your comment really necessary?


Right, I was continuing the analogy.

One example of a community being labeled as right-wing extremists (or worse) is Gab.ai. Another example is Zerohedge's comment section.

Was anyone doxxed for using these platforms? I don't know. I do know those platforms have been deplatformed.

There seems to be a concerted effort to label Facebook a place where extremists gather, lots of people virtue signaling about how they're not using Facebook because it's so toxic and full of Nazis. It's only a matter of time until it's a sin just for using Facebook.


> You can flag this comment away, but I'll say this anyway

Same here with the opposite point:

The government intervening in and controlling private companies that distribute information has a proven track record of bringing down democracies. It happens constantly, all around the world.

These are private companies that enable citizens to communicate with each other, and should be regulated as such (as little as possible).


The problem is in concentration of power: gov't and companies. When you have independent printing presses in every town, gov't should let them print (and refuse to print) whatever they like. If one org buys up all the newspapers and puts them under one corporate umbrella, their corp policies determine what the electorate gets to hear. Not acceptable.

The difference is not ownership but concentration, and on the network you have network effects on a scale never before seen. We have no problem with people not being able to say or read what they want in email because it is a federated system that anyone can join through any provider. If Facebook or Twitter were just an open protocol, the users could decide for themselves who to subscribe to, managing their own "safety". But instead, we have giant, centralized organizations being accused of bias from every side because they wield so much centralized control over information.

So, when companies are small, they should be able to say or not say whatever they want. As they get larger, they gradually surrender the power to regulate their own users. They can stay small, or they can release control--their choice. They can open the protocol and become just one provider in an open protocol (like email), and if they have fewer than, say, 10M users, they can totally offer the "service" of blocking incoming and outgoing info for those users who want it done for them, and those who think "no, thanks, I'll decide for myself whose feed to subscribe to" will be able to do so via other providers.


Hm a lot of good points for me to think about here.


Facebook, as a business, is primarily interested in making money and would love to avoid all these messy issues regarding what to censor so that it can focus on selling ads.

Therefore, FB would love to have a legitimate external authority, e.g. the government, mandating what's allowed or not. Unfortunately, no one is willing to do that dirty work, thus forcing Facebook to do it themselves.

(Granted, I've simplified the matter quite a bit here by saying "Facebook, as a business". Facebook is also a collection of individual employees who have their own beliefs and do shape FB in their image.)


Who's forcing Facebook to engage in censorship in the first place? Why don't they just go on about the business of making money? They are immune from defamation/libel/slander so where's the issue?


They are not immune from boycotts like the one they are currently facing. That's why they are censoring certain types of content, because their advertisers don't want to be associated with it.


Absolutely. Also user base outrage. At some point if enough people quit FB it would be a problem because advertisers need an audience. So far the numbers show it hasn’t happened despite all the posts on hacker news about it, but it’s a concern.


The easiest way to crack down on that ability is enforcing a separation between the basic social networking platform and the management and moderation of whatever content is flowing over it. This is the model of federated services like Mastodon. It's more like pursuing "free hearing" (of voices you may or may not be interested in, and can thus follow or filter the way you like) than "free speech". It privileges the receiver of information over those who are sending it.
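
A minimal sketch of that receiver-side model, using Mastodon's public timeline API (the instance, muted accounts, and muted keywords below are made-up examples; a real client would expose them as user settings):

    # "Free hearing": the network relays posts; the receiver's client decides
    # what actually gets heard, with rules the receiver chose.
    import json
    import re
    import urllib.request

    INSTANCE = "https://mastodon.social"                 # example instance
    MUTED_ACCOUNTS = {"spammer@example.social"}          # receiver-chosen rule
    MUTED_KEYWORDS = {"giveaway", "engagement farming"}  # receiver-chosen rule

    def strip_html(html):
        """Crude tag stripper, good enough for a demo."""
        return re.sub(r"<[^>]+>", " ", html)

    def fetch_public_timeline(limit=20):
        url = f"{INSTANCE}/api/v1/timelines/public?limit={limit}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def wanted(status):
        """The receiver decides what to hear; the platform just delivers."""
        if status["account"]["acct"] in MUTED_ACCOUNTS:
            return False
        text = strip_html(status["content"]).lower()
        return not any(kw in text for kw in MUTED_KEYWORDS)

    if __name__ == "__main__":
        for status in fetch_public_timeline():
            if wanted(status):
                author = status["account"]["acct"]
                text = strip_html(status["content"]).strip()
                print(f"{author}: {text[:120]}")

Because the filter runs on the receiver's machine, two users of the same instance can end up hearing completely different things, which is the point.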


"These services are utilities, sometimes monopolies, and must be regulated as such."

Maybe we could just start with regulating the Internet as a common carrier instead of an "information service".

Anyone who is disappointed with the content of the "speech" they see on a particular "network" (a website/endpoint), should study the case of Usenet. Disprove this hypothesis: Unmoderated "public discourse" on a computer network has a tendency to descend into a sewer.

The only difference today in terms of this phenomenon is the number of users of the "network" (website/endpoint) or computer network (internet).

Maybe "networks" used for communication need to migrate to "mesh" instead of "hub and spoke". Perhaps websites were not meant to have billions of pages and be called "networks".


>Unmoderated "public discourse" on a computer network has a tendency to descend into a sewer.

A stream of sewage can still be shined into liquid gold by a sufficiently intelligent client, that chooses what to ingest and what to ignore.


Certainly the problem can be seen as one of clients. The "modern web browsers" are instruments for fueling the dumpster fire. For the web, I wrote my own "user agent": a client that has zero intelligence but is fully under the user's control. The intelligence lies with the user, not the third-party authors of the client (with their business purposes and conflicts of interest).
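
For the curious, a toy version of that idea (not the parent's actual client, just a sketch): a user agent that fetches a page, executes nothing, and leaves every further decision to the human.

    # A deliberately "zero intelligence" user agent: fetch, strip markup, print.
    # No scripts run, no tracking, no ranking; the user does the thinking.
    import sys
    import urllib.request
    from html.parser import HTMLParser

    class TextOnly(HTMLParser):
        """Collect visible text; skip the contents of script and style tags."""
        def __init__(self):
            super().__init__()
            self.skip = 0
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self.skip += 1

        def handle_endtag(self, tag):
            if tag in ("script", "style") and self.skip:
                self.skip -= 1

        def handle_data(self, data):
            if not self.skip and data.strip():
                self.chunks.append(data.strip())

    if __name__ == "__main__":
        url = sys.argv[1] if len(sys.argv) > 1 else "https://example.com"
        with urllib.request.urlopen(url) as resp:
            charset = resp.headers.get_content_charset() or "utf-8"
            page = resp.read().decode(charset, errors="replace")
        parser = TextOnly()
        parser.feed(page)
        print("\n".join(parser.chunks))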


If Facebook didn't censor their content and allowed all content, all the advertisers would leave. That's why advertisers are abandoning it now: they aren't doing enough. You're not paying money to these social networks; you are not the customer.


I downvoted you because you made an extreme assertion with no supporting evidence. Also, social media is not a utility or monopoly; no one needs to use them.


Regulated by whom? The devil is in the details.


By law. A simple law like those that govern utilities like phone lines, mail and Internet access. These exist in most western countries. Your phone conversations, letters and self-hosted personal website cannot be censored by anyone without a court order.


It would be simple to add to Section 230 a line that reads:

"For the purposes of Section 230 immunity all platforms shall moderate content in a manner that conforms to the 1st amendment of the US Constitution as if they were a government agency. Any platform that chooses to suppress, moderate or take down 1st amendment protected speech shall be considered publishers of all user generated content and loose the liability protections granted by this section"


How about America regulates its actual utilities first? The problem isn't private companies regulating their properties as they see fit; the problem is America's propensity for oligarchs, in every part of society. A people that so easily sells itself out to a single party is going to have the same problems again and again.


A private company can limit what you can put on their platform, end of story.


Don't forget the Visa and Mastercard monopolies doing the same thing.


All wide publication systems have been moderated by the owner since the beginning of time, yet your prediction has not come to pass.


Not going to flag it because you are right.

Once you are a platform (FB, Twitter, Google, Reddit, etc...), you should only be allowed to "censor" things that are illegal.


To what extent should a platform have editorial discretion? What sorts of actions should Facebook be permitted to take to avoid becoming 4chan? What sorts of actions should Google be permitted to take to avoid turning into a pure pile of spam?

(For the latter, perhaps we should pretend that Google isn’t already mostly spam.)


This won't be a detailed answer but a principle: have as much control over each user as you like up to a certain scale followed by gradually decreasing per-user control if you keep gaining users. The company can choose whether to stay smaller and keep full control over all users or, say, federate their protocol to allow other providers to freely join the network-effects-dominated system, so users could decide for themselves which other users to subscribe to. The original company could then retain full control over their size-limited portion of the users, those still wanting "full service", by offering such benefits as "safety" from bad opinions as an incentive.

This would certainly apply to cases such as Amazon, where the bigger they got, the more helpful they would have to be to small competitors instead of putting them out of business.


Obviously these are not just black and white issues. There are lots of nuance here. However, I stand by the word of law being the overarching governing principle.

Let me ask you a question. What if the big 3 grocery stores in town decided not to sell you groceries unless you stop doing an activity they consider distasteful but is otherwise legal? Would you still be OK with it?


I totally agree with you. It’s a shame so many don’t understand why this is so important, instead seeking a quick political win through ill-gotten means. It’s also a shame that younger generations have somehow come to devalue principles like free speech.

Companies like these cannot be trusted to moderate neutrally and are a threat to society due to their size. Their organizational cultures can be taken over by a vocal/fanatical minority, and then weaponized against those outside their tribe. And this is exactly what has happened - for example consider this leaked video of Facebook moderators (https://www.realclearpolitics.com/video/2020/06/23/facebook_...), talking about deleting people with MAGA hats by flagging them for “terrorism”.



