
I don't think the problem is automation...

It's people's expectation for it to be perfect, and the egoic drive to blame someone when something goes wrong. There was no reason for the hype around this story... an automated classifier had a false positive. That's not Google attacking the videos, that's a technical issue, and it needs to have zero feelings involved because the entire process happened in a damned computer incapable of feelings...

But everyone needs to feed their outrage porn addiction...



It's not a technical issue. Software is not yet capable of accurate content detection, and even if it were, it's not clear whether this sort of thing should be automated. It's not like Google can just change a few lines of code and the problem is gone.


> It's not a technical issue. Software is not yet capable of accurate content detection,

Your second sentence is a technical argument, which makes your first a lie. Obviously Google disagreed, which is why they put this system into place. And if they were wrong about that they were wrong for technical reasons, not moral ones.

I mean, you can say there's a policy argument about accuracy vs. "justice" or whatever. It's a legitimate argument, and you can fault Google for a mistake here. But given that this was an automated system, it's disingenuous to try to make more of this than is appropriate.


If you just stare at the words and ignore my meaning, sure. But saying this is a technical problem is like saying that climate change is a technical problem because we haven't got fusion reactors working yet.


Then I don't understand what your words mean. Climate change is a technical problem and policy solutions are technical.

My assumption was that you were contrasting "technical" problems (whether or not Google was able to do this analysis in an automated way) with "moral" ones (Google was evil to have tried this). If that's not what you mean, can you spell it out more clearly?


Is there any problem you wouldn't frame as technical, then? If the software isn't anywhere close to capable enough to do this task and YouTube decides to use it anyway, that is a management problem. Otherwise, literally every problem is technical and we just don't have the software to fix it yet.


Sure: "Should Google be involved in censoring extremist content?". There's a moral question on exactly this issue. And the answer doesn't depend on whether it's possible for Google to do it or not.

What you guys and your downvotes are doing is trying to avoid making an argument on the moral issue directly (which is hard) and just taking potshots at Google for their technical failure as if it also constitutes a moral failure. And that's not fair.

If they shouldn't be doing this they shouldn't be doing this. Make that argument.


The software makes literally millions of correct calls every day, both positive and negative.

I'd say it's pretty capable.

Human raters are a fucking nightmare of inconsistency and bias. I'd guess this is more accurate at this point, and is only going to improve.


I would argue climate change is a political problem.

Policy solutions are political.

A policy is a deliberate system of principles to guide decisions and achieve rational outcomes. A policy is a statement of intent, and is implemented as a procedure or protocol. - https://en.m.wikipedia.org/wiki/Policy


If you believe climate change is a technical problem then there isn't much point continuing this discussion. Using that logic you could claim that any problem is technical because everything is driven by the laws of physics.


The point is, there will be false positives; there is no reason to get upset and hurt over them...

There is no perfect system. If it's automated, there will be false positives (and negatives); if there is a human involved, you have a clear bias issue; if there is a group of humans involved, you have societal bias to deal with...

There is no perfect system for something like this, so the best answer is to use one that gets it right most of the time... then clean up when it makes a mistake. And you shouldn't have to apologize for the false positive; people need to put on their big boy pants and stop pretending to be the victim when there is no victim to begin with...
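
To put a rough sense of scale on "gets it right most of the time" (every number below is a made-up assumption for illustration, nothing YouTube has published):

    # Back-of-the-envelope sketch; all figures are illustrative assumptions.
    uploads_per_day = 500_000      # assumed daily upload volume
    violating_share = 0.01         # assumed fraction of uploads that actually violate policy
    false_positive_rate = 0.001    # assumed chance a clean video gets wrongly flagged
    false_negative_rate = 0.05     # assumed chance a violating video slips through

    violating = uploads_per_day * violating_share
    clean = uploads_per_day - violating

    wrongly_flagged = clean * false_positive_rate   # innocent uploads struck per day (~495)
    missed = violating * false_negative_rate        # violating uploads missed per day (~250)

    print(f"wrongly flagged per day: {wrongly_flagged:.0f}")
    print(f"missed per day: {missed:.0f}")

Even a classifier that is 99.9% right about clean videos produces hundreds of mistaken strikes a day at that kind of volume, which is exactly why the "clean up when it makes a mistake" step matters.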


This is the exact same argument for "stop and frisk", and that is just totally NOT OK.


It's not the exact same argument because stop and frisk is not automated.


It isn't the same process being defended, but I clearly didn't claim that: the argument used to defend the different processes, however, is the same. This "put on your big boy pants" bullshit is saying that people should accept any incidental harassment because false positives are to be tolerated and no system is perfect, so we may as well just use this one. If the false positives of a system discriminate against a subset of people--as absolutely happens with these filters, which end up blocking people from talking about the daily harassment they experience or even using the names of events they are attending without automated processes flagging their posts--then that is NOT OK.

https://www.washingtonpost.com/business/economy/for-facebook...

https://www.lgbtqnation.com/2017/07/facebook-censoring-lesbi...


That's exactly the OPPOSITE of stop and frisk.

1) Stop and frisk is BIASED heavily on race because it's a HUMAN making the choice...

2) Stop and Frisk is the GOVERNMENT, and therefore actually pushes up against the constitution.

How do you see these things as remotely the same?


The false positives are not random: they target minorities; these automated algorithms designed to filter hate have also been filtering people trying to talk about the hate they experience on a daily basis. They keep people from even talking about events they are attending, such as Dykes on Bikes. It is NOT OK to tell these people to "put on their big boy pants" and put up with their daily dose of bullshit from the establishment.

https://www.washingtonpost.com/business/economy/for-facebook...

https://www.lgbtqnation.com/2017/07/facebook-censoring-lesbi...


On one hand, one problem with automated systems is that they're perfectly happy to encode existing biases.

On the other, the Alphabet family don't have the support systems to clean up when they make a mistake.


Your whole premise is wrong, because the final decisions were made by humans. But even if they weren't, you're still mistaken. If you write a program to do an important task, it is your responsibility to see that it's both tested and supervised to make sure it does it properly. Google wasn't malicious here, but it was dangerously irresponsible.


From the article:

"Previously we used to rely on humans to flag content, now we're using machine learning to flag content which goes through to a team of trained policy specialists all around the world which will then make decisions," a spokeswoman said..."So it’s not that machines are striking videos, it’s that we are using machine learning to flag the content which then goes through to humans."

"MEE lodged an appeal with YouTube and received this response: 'After further review of the content, we've determined that your video does violate our Community Guidelines and have upheld our original decision. We appreciate your understanding.'

Humans at YouTube made the decisions about removing videos. Then, on appeal they had a chance to change their minds but instead confirmed those decisions. Then, because of public outcry, YouTube decided it had been a mistake. 'The entire process happened in a damned computer incapable of feelings' is inaccurate.
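
The flow described there, sketched out (every name below is hypothetical shorthand for the process in the quotes, not YouTube's actual code):

    # Hypothetical sketch of the moderation flow the spokeswoman describes.
    def ml_model_flags(video: str) -> bool:
        # stand-in for the machine-learning classifier; it only flags candidates
        return True

    def policy_specialist_review(video: str) -> str:
        # stand-in for the trained human reviewer who makes the actual decision;
        # in this story the reviewers chose "strike" both times
        return "strike"

    def handle_upload(video: str) -> str:
        if ml_model_flags(video):
            return policy_specialist_review(video)   # a human decides, not the model
        return "published"

    def handle_appeal(video: str) -> str:
        return policy_specialist_review(video)       # appeal = another human review

The model is only the triage step; every strike in this story passed through a human decision, and the appeal through another.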


Read the article. It says that humans made the final decisions.


[Disclaimer: I'm not a youtuber, so my knowledge is only 2nd hand]

Aside from but related to this story, many people are making a living off of YouTube ad revenue, and the AI is unpredictable in how it will respond in terms of promoting your video content on the front page, as links from other popular videos, and so forth. I think it's also unknown how the AI categorizes the content appropriateness of videos for advertisers, which, if it categorizes things the wrong way, leaves your stuff unmonetizable.

Basically, people are throwing video content up, but have no real feedback loop to gauge whether or not they violate the "proper" protocols that the AI rewards. This really is a problem of automation using (presumably) trained statistical rules where nobody really knows what specifically influences the decisions about their videos.


It is the People's expectation that it be perfect. Once they have determined that there is something badly wrong going on in a video, destroying it is a violation of 18 U.S. Code § 1519 (destroying evidence with intent). They had better have backups.



