"but unless we can have truly objective AI, I don't think we will be absent of these types of problems."
You don't need "objective AI." In this case, you just need a process that goes to a human when it is contested. Sure, make the user pay a certain amount of money to get an investigation by a third party or to have all the information provided to a court. But if Google made a mistake, and that mistake caused tort, Google is going to have to pay out.
The appropriate thing to happen here is something along these lines:
Google's AI tells the person their account is cancelled, but they can contest it for $100. The person contests it, a human is put on the case and investigates, and that person determines that Google made a mistake. Google then restores the account quickly and pays the person $200 for the hassle.
What I meant, specifically, is that until we can define these things from an objective rather than subjective stance, AI will not fare any better.
That's an interesting proposed solution. I can see some potentially undesirable effects, similar to those of the legal system, but it's perhaps better than what exists currently.
While I like the spirit of this idea, wouldn't the reviewer be incentivized to say "no" to as many claims as possible? This is akin to how health insurance reviews work and, at least in the USA, that never goes poorly...
No, it should not be Google doing the review; it has to be an unbiased third party. This should apply to a lot of things, including app store disputes. Google doesn't make the decision, but they do get a share of the fee (if they don't have to pay out), since it is a hassle for them and they should be protected against frivolous complaints.
And by the way, while I'm sure Google should have to restore the account in this case, I'm not so sure Google should have to compensate even the investigation fee, because honestly it is a tad clueless to take photos of a kid's privates and let them upload to the cloud. He didn't do anything truly "wrong" (in the molestation/child-porn sense), but he still should have known it wasn't a good idea, so paying $100 for his mistake and being without his account for a week sounds like about the right "punishment."
I think the reality is that it technically falls into a legal grey area: the classic scenario where two kids have been dating for years and one turns 17 while the other is still 16 (or whatever the boundary case may be). It's the letter-versus-spirit-of-the-law argument.
It is, by definition, distribution of this material, and building software edge-case exceptions to allow certain situations through is something I can't imagine anyone being willing to sign their name to endorse.
It seems from a heartless management perspective that the simplest decision is to walk away from the whole situation, wash their hands of it, and accept this as collateral damage.
You don't need "objective AI." In this case, you just need a process that goes to a human when it is contested. Sure, make the user pay a certain amount of money to get an investigation by a third party or to have all the information provided to a court. But if Google made a mistake, and that mistake caused tort, Google is going to have to pay out.
The appropriate thing to happen here is something along these lines:
Google's AI tells the person their account is cancelled, but they can contest it for $100. The person contests it, and a human is put on the case and investigates. That person determines that Google made a mistake, and Google will restore the account quickly, and pay the person $200 for the hassle.