
No, this just leads to more censorship without any option to appeal.



We’re talking about Facebook here. You shouldn’t have the assumption that the platform should be “uncensored” when it clearly is not.

Furthermore, I’d rather have the picture of my aunt’s vacation taken down by an AI mistake than have hundreds of people getting PTSD because they have to manually review, on an hourly basis, whether some decapitation was real or illustrated.


> without any option to appeal.

Why would that be?

Currently, content is flagged and moderators decide whether to take it down. Using AI, it's easy to conceive of a process where some uploaded content is pre-flagged, requiring an appeal (otherwise it's the same as before, with a pair of human eyes automatically looking at uploaded material).

Uploaders trying to publish rule-breaking content would not bother with an appeal that would reject them anyway.
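To make the flow concrete, here is a hypothetical sketch of the pre-flag-then-appeal process described above — not any platform's real pipeline. The function names, the score, and the 0.9 threshold are all made up for illustration:

```python
# Hypothetical sketch: an AI classifier gates uploads, and anything it
# blocks lands in a human appeal queue instead of vanishing silently.
# All names and the 0.9 threshold are invented for illustration.

appeal_queue = []

def handle_upload(content, ai_score, threshold=0.9):
    """Publish unless the AI flags the upload; flagged items await human appeal."""
    if ai_score < threshold:
        return "published"
    appeal_queue.append(content)  # human moderators work through this queue
    return "blocked_pending_appeal"

print(handle_upload("aunt's vacation photo", ai_score=0.10))  # published
print(handle_upload("borderline clip", ai_score=0.95))        # blocked_pending_appeal
```

The point of the sketch is only that "AI moderation" and "no appeal" are separate design decisions: the classifier decides what gets held, and a (smaller) human staff decides what happens to held items.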


Because edge cases exist, and it isn't worth it for a company to hire enough staff to deal with them when one user with a problem, even if that problem is highly impactful to their life, just doesn't matter when the user is effectively the product and not the customer. Once the AI works well enough, the staff is gone and the cases where someone's business or reputation gets destroyed because there are no ways to appeal a wrong decision by a machine get ignored. And of course 'the computer won't let me' or 'I didn't make that decision' is a great way for no one to ever have to feel responsible for any harms caused by such a system.


This, and social media companies in the EU tend to just delete stuff because of draconian laws requiring content to be deleted within 24 hours or they face a fine. So companies would rather not risk it. Moderators also have only a few seconds to decide whether something should be deleted.


> because there are no ways to appeal

I already addressed this and you're talking over it. Why are you assuming that AI == no appeal and zero staff? That makes zero sense; one has nothing to do with the other. The human element comes in for the appeal process.


> I already addressed this and you're talking over it.

You didn't address it, you handwaved it.

> Why are you making the assumption that AI == no appeal and zero staff?

I explicitly stated the reason -- it is cheaper and it will work for the majority of instances while the edge cases won't result in losing a large enough user base that it would matter to them.

I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.

> That makes zero sense, one has nothing to do with the other.

Cheaper, mostly works, and the losses from people leaving are smaller than the money saved by removing support staff: that makes perfect sense, and the two things are as related to each other as identical twins.

> The human element comes in for appeal process.

What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time? Corporations don't exist to do the right thing or to make people happy, they are extracting value and giving it to their shareholders. The shareholders don't care about anything else, and the way I described returns more money to them than yours.


> I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.

Their copyright takedown system has been around for many years and wasn't contingent on AI. It's a "take-down now, ask questions later" policy to please the RIAA and other lobby groups. Illegal/abuse material doesn't profit big business, their interest is in not having it around.

You deliberately conflated the moderation and appeal processes from the outset. You can have 100% AI handling of suspect uploads (for which the volume is much larger), with a smaller staff handling appeals (for which the volume is smaller), mixed with AI.

Frankly if your hypothetical upload is still rejected after that, it 99% likely violates their terms of use, in which case there's nothing to say.

> it is cheaper

A lot of things are "cheaper" in one dimension irrespective of AI, doesn't mean they'll be employed if customers dislike it.

> the money saved by removing support staff makes perfect sense and the two things are related to each other like identical twins are related to each other.

It does not make sense to have zero staff as part of managing an appeal process (precisely to deal with edge cases and the fallibility of AI), and it does not make sense to have no appeal process.

You're jumping to conclusions. That is the entire point of my response.

> What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time?

AI isn't there yet; notwithstanding, if it did a good job 98% of the time, then who cares? No one.


> Their copyright takedown system has been around for many years and wasn't contingent on AI.

So what? It could rely on tea leaves and leprechauns; the point is that whatever automation works will be relied on at the expense of any human staff or process.

> it 99% likely violates their terms of use, in which case there's nothing to say.

Isn't that 1% exactly the edge cases I am specifically saying are important and won't get addressed?

> doesn't mean they'll be employed if customers dislike it.

The customers on ad supported internet platforms are the advertisers and they are fine with it.

> You're jumping to conclusions. That is the entire point of my response.

Conclusions based on solid reason and evidenced by past events.

> AI isn't there yet, notwithstanding, if they did a good job 98% of the time then who cares? No one.

Until you realize that 2% of 2.89 billion monthly users is 57,800,000 people.


lol "important". You're free to flock to some other platform that better caters to extremists.


Nobody has a right to be published.


Then what is freedom of speech if every platform deletes your content? Does it even exist? Facebook and co. are so ubiquitous that we shouldn't just apply normal laws to them. They are bigger than governments.


Freedom of speech means that the government can't punish you for your speech. It has absolutely nothing to do with your speech being widely shared, listened to, or even acknowledged. No one has the right to an audience.


The government is not obligated to publish your speech. They just can't punish you for it (unless you cross a few fairly well-defined lines).


> Then what is freedom of speech if every platform deletes your content?

Freedom of speech is between you and the government and not you and a private company.

As the saying goes, if I don't like your speech, I can tell you to leave my property, physical or virtual. That's not censorship; that's how freedom works.


If this were the case, then Facebook shouldn’t be obligated to moderate any content. Not even CSAM.

Should each government, and in some cases provinces and municipalities, have teams to regulate content from their region?


This has always been the case. If the monks didn't want to copy your work, it didn't get copied by the monks. If the owners of a printing press didn't want to print your work, you didn't get to use the printing press. If Random House didn't want to publish your manifesto, you didn't get to compel them to publish it.

The first amendment is multiple freedoms. Your freedom of speech is that the government shouldn't stop you from using your own property to do something. You are free to print out leaflets and distribute them from your porch. If nobody wants to read your pamphlets that's too damn bad, welcome to the free market of ideas buddy.

The first amendment also protects Meta's right of free association. Forcing private companies to platform any content submitted to them would outright trample that right. Meta has a right to not publish your work, so that they can say "we do not agree with this work and will not use our resources to expand its reach".

We have, in certain cases, developed systems that treat certain infrastructure as a regulated pipe compelled to carry everything, as with classic telephone infrastructure. The reason is that it doesn't make much sense to require every company to put up its own physical wires; that's dumb and wasteful. Social networks have no natural monopoly and should not be treated as common carriers.


Not if we retain control and each deploy our own moderation individually, relying on trust networks to pre-filter. That probably won't be allowed to happen, but in a rational, non-authoritarian world, this is something that machine learning can help with.


Curious, do you have a better solution?


The solution to most social media problems in general is:

`select * from posts where author_id in @follow_ids order by date desc`

At least 90% of the ills of social media are caused by using algorithms to prioritize content and determine what you're shown. Before these were introduced, you just wouldn't see these types of things unless you chose to follow someone who chose to post it, and you didn't have people deliberately creating so much garbage trying to game "engagement".


I'd love a chronological feed but if you gave me a choice I'd get rid of lists in SQL first.

> select * from posts where author_id in @follow_ids order by date desc

SELECT post FROM posts JOIN follows ON posts.author_id = follows.author_id WHERE follows.user_id = $session.user_id ORDER BY posts.date DESC;
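For illustration, the join version above can be run against a toy SQLite schema in a few lines. The table and column names here (posts, follows, body, date) are made up for the example:

```python
import sqlite3

# A minimal sketch of a chronological "following" feed, assuming a toy
# schema with posts(author_id, body, date) and follows(user_id, author_id).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (author_id INTEGER, body TEXT, date TEXT);
CREATE TABLE follows (user_id INTEGER, author_id INTEGER);
INSERT INTO posts VALUES (1, 'hello from alice', '2021-06-01'),
                         (2, 'hello from bob',   '2021-06-02'),
                         (3, 'hello from carol', '2021-06-03');
INSERT INTO follows VALUES (42, 1), (42, 3);  -- user 42 follows alice and carol
""")

feed = conn.execute("""
    SELECT posts.body FROM posts
    JOIN follows ON posts.author_id = follows.author_id
    WHERE follows.user_id = ?
    ORDER BY posts.date DESC
""", (42,)).fetchall()

print([body for (body,) in feed])  # newest first, only followed authors
```

No ranking model, no engagement signals: just the posts of the people you follow, newest first.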


How is it different from some random guy in Kenya? It's not like you'll ask him to double-check the results.


That's a workflow problem.



