The title is incorrect (as the article itself notes): neither Gmail nor email contents were the trigger. Rather, the situation unfolded as a result of Google Photos automatically uploading a pediatric medical photo, flagging it as CSAM, and setting in motion a law-enforcement reporting process.
I think this represents two issues. One: AI is not AI but a litany of conditionals that poorly reflects what we as humans can determine fairly quickly. Unless the process is unquestionably objective, and possibly quantitative in its outcome, I just don't feel full automation will ever be free of incidents like this.
Two: Google as a business automates everything. They are over their ski tips on how much "getting it perfect" is actually possible and are completely accepting of "close enough." I don't think this mindset is unique in the corporate world, and unless revenue is impacted heavily, they have no incentive to incur the heavy cost of making the end user whole. The majority of their revenue is ad-based, meaning B2B: users are a bucket of data, not the customer they are selling to.
It's like complaining that a cattle farmer doesn't treat his cows kindly enough... the farmer would think you're nuts, even if the masses might agree with you.
As it relates to this specific type of content, I'd rather see something bad happen to good people than something good happen to bad people. Maybe it's unpopular, and I'd hate to be the person on the receiving end, but unless we can have truly objective AI, I don't think we will be free of these kinds of problems. In the meantime there has to be someone in the background fixing these cases, and Google has shown clearly with YouTube that unless you are making enough noise, they aren't listening.
"but unless we can have truly objective AI, I don't think we will be absent of these types of problems."
You don't need "objective AI." In this case, you just need a process that goes to a human when it is contested. Sure, make the user pay a certain amount of money to get an investigation by a third party or to have all the information provided to a court. But if Google made a mistake, and that mistake caused tort, Google is going to have to pay out.
The appropriate thing to happen here is something along these lines:
Google's AI tells the person their account is cancelled, but they can contest it for $100. The person contests it, a human is put on the case, and they investigate. If that human determines that Google made a mistake, Google restores the account quickly and pays the person $200 for the hassle.
I meant to say specifically that until we can define things from an objective rather than a subjective stance, AI will not fare any better.
That's an interesting proposal for a solution. I can see some potentially undesirable effects, similar to those in the legal system, but it's perhaps better than what exists currently.
While I like the spirit of this idea, wouldn't the reviewer be incentivized to say "no" to as many claims as possible? This is akin to how health insurance reviews work, and, at least in the US, that never goes poorly...
No, it should not be Google doing the review; it has to be an unbiased third party. This should apply to a lot of things, including app store disputes. Google doesn't make the decision, but they do get a share of the fee (if they don't have to pay out), since it is a hassle for them and they should be protected against frivolous complaints.
And by the way, while I'm sure Google should have to restore the account in this case, I'm not so sure Google should have to compensate even for the investigation fee. Honestly, it is a tad clueless to take photos of a kid's privates and let them sync to the cloud. He didn't do anything truly "wrong" (in the molestation/child porn sense), but he still should have known it wasn't a good idea, so paying $100 for his mistake and being without his account for a week sounds like about the right "punishment."
I think the reality is that it technically falls into a legal grey area. It's the classic case of two people dating for years where one turns 17 while the other is still 16 (or whatever the boundary case may be): a letter-versus-spirit-of-the-law argument.
It is, by definition, distribution of this material, and building software edge-case exceptions that allow certain situations through is something I can't imagine anyone being willing to sign their name to.
It seems from a heartless management perspective that the simplest decision is to walk away from the whole situation, wash their hands of it, and accept this as collateral damage.