
So you think that if google really believes you're a terrorist, they're obliged to go on supporting you? There's a big difference between having no idea that terrorists use your platform and knowingly supporting people you believe to be terrorists.



Who is this "Google" person that "thinks" he is a terrorist? There is no actual human entity here doing any "thinking" at all. It is an automated system that cannot be argued with or challenged by normal means.

Terrorism is an INCREDIBLY SERIOUS allegation. It should be dealt with by law enforcement and the courts. And the penalty for a _false_ accusation, which is exactly what this is, should be severe as well. Such an accusation can be life-ruining.


The entities that authored or authorized the system are the persons that "think" so, because they wrote the definitions that identified him as such. You don't get to just build a system and totally disclaim all liability for what it does. If the people at google run the system, then they own what happens with it. In this case, they own the fact that they identified suspected terroristic activity, and they should be held responsible if they don't stop it, at least until it can be reviewed.

Terrorism should not just be dealt with through the courts. If you think someone is committing violence against others, your responsibility is to shut down all support of that person; you mustn't aid them in any way. There's no requirement that private entities wait for the courts before stopping commerce with suspected terrorists. If you are google, that means completely locking out all accounts that you identify as involved in terrorism.

The Supreme Court has held that when making statements on matters of public concern, you haven't committed libel if you acted without malice. It's pretty clear google wasn't acting with malice here, and it is a matter of great public concern if someone is actually amassing armored vehicles to use against others. And just because someone is a terrorist doesn't mean you can or should act through the courts. If google believes someone is acting against non-US nationals in a foreign country, there's very little the US courts could pragmatically do about it other than order entities in the US not to support them. Of course, if evidence emerged that google had flagged terrorist activity and continued supporting it anyway, I don't think that would look favorable in any prosecution.

Which, yet again, brings me to my original point: google shouldn't even be putting themselves in the position of scanning private data, opening themselves up to the liability of having invaded privacy and of coming to suspect someone of a crime.


> If you think someone is committing violence against others your responsibility is to shut down all support of that person, you mustn't aid them in any way.

"If you think". What if you're wrong?


Intent matters. For instance, if a fed or informant asks to use your services and you think they are doing so for illegal purposes, such as terrorism, then you can be convicted (even though the fed/informant never actually would have carried out the crime, and it was all a ruse to arrest you). If you come into my business, and I announce that I believe you are engaging in terrorist activity, then the case for my intent is pretty much open and shut if I go on supporting your operation.

So if you're wrong, you merely didn't engage in commerce with someone you believed to be a terrorist. There's nothing wrong or illegal about that. The agreement for google's free services doesn't mean google can't end the service at any time, and I'd bet the boilerplate for any paid service has a pressure-release valve to end the service if google merely suspects you of engaging in illegal activity. In short, Google owes you nothing, and they're not your slave, obliged to provide you service despite believing you're a criminal.


I'd say when google twigs to you as a violent terrorist, it should escalate to a real human to check before shutting the account down and forwarding it to the FBI.


And in the meantime, before a human can check it, you think google will be completely without liability if they flag someone as a terrorist but let them carry on with their business, even aiding them with all of google's services?


They most certainly should be, because an automated Google system having "flagged" a user as "a terrorist" means nothing in the eyes of the law. On the other hand, they should not in any way be without liability for wrongly suspending a user's account just because their automated system unjustifiably "flagged" them.


>they should not in any way be without liability for wrongly suspending a user's account just because their automated system unjustifiably "flagged" them.

A legal team of 1,000+ at google is betting that you're wrong. I have a feeling they came to a pretty logical conclusion: the liability of aiding someone flagged for terrorist activity is greater than the liability of shutting down someone to whom you owe nothing.


They seem to be banning accounts that are clearly not terrorists.

Also, I get that there are some black-and-white cases here. But there's plenty of grey too. I imagine a lot of terrorist videos aren't posted by terrorists, but rather by some disenfranchised/angry person who hasn't done anything illegal.


Agreed. Intent is important.

Unfortunately, google has clearly stated they think terroristic acts are happening. Once you indicate that you believe a counterparty is engaging in terrorist acts, you mustn't support them. Quite a few people have gone to prison for a very long time because a fed or informant who had zero interest in actually carrying out a terrorist act made them believe one was in the works. And all the prosecutor needs to show is that google _thought_ it was aiding someone committing terroristic acts, and the prison sentence could be lengthy.

Remember, the legal system doesn't have much room for nuance when it comes to criminal conspiracy. Even if somebody else gives you all the tools to do something horrible and eggs you on to do it, you are responsible the moment you take any act whatsoever toward what you believe will be that end. That holds even if the person egging you on is actually a fed who knows all along the act was never meant to happen, and the whole idea was presented to you on false pretenses to trick you into going to jail.

And this brings me back to my original thought: google is pretty dumb for even putting themselves in the position of running a crude system to decide whether they're dealing with terrorist material, because it opens them up to huge liability that they wouldn't have had if they'd simply allowed their users to exist in privacy.


If Google thinks you're doing something illegal, then they should call the police after a real human has checked. If not, they should not ruin your life. Who knows, tomorrow all your Google accounts might get locked forever because an Artificial (Non)Intelligence found a word or part of an image that looks like something "bad" but not illegal.


It's estimated that 2 billion people use google. And in the US, for instance, roughly one third of people have a record for criminal activity. Google doesn't have the manpower to call the police on the millions of people with records actively using their services.
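A rough back-of-envelope of the scale, in Python; the US user count here is an illustrative assumption, not a sourced figure:

    # Illustrative sketch only; us_users is an assumed figure.
    us_users = 250_000_000            # assumed US share of google's ~2B users
    record_rate = 1 / 3               # the cited rate of US criminal records
    with_records = us_users * record_rate
    print(f"{with_records:,.0f}")     # ~83,333,333 -- far beyond manual review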


So what is the solution? Google (and others) are clearly not competent to implement a good algorithm, so why the hell not give the unfortunate users a simple way to demand that a human check that their image or words are not criminal?

Sony did this to me: they blocked my son's account for 2 months, with no reason given and no way to appeal. The good part is that I now have a reason not to buy Sony products again; the bad part is that until it happens to you, you will think that there are millions of users and only a few are affected, so surely it will not happen to you.


I've already stated the solution; it's in the parent comment to which we are all responding:

>What we really should be asking is why is google examining user data at all. They should not be in the position where they can even find out who is a terrorist.

The solution I advocate is that they shouldn't be looking at our data. I don't want them evaluating who is or isn't a terrorist, because as soon as they do that, they need to act on that information or be liable for failing to do so.

But I think you have found a very good solution with Sony, and maybe we should apply it to google: simply not use their service, rather than get upset because they've chosen not to support those they think are terrorists. Again, google is not your slave that must perform a service for you.



