Does the law have a moderation carve-out? There are plenty of laws with what's called 'strict liability', where your intent doesn't matter.
I'm not suggesting that this is absolutely, positively a situation where strict liability exists and moderation isn't allowed. But the idea that "hey, we're trying to do the right thing here" will be honored in court is... not obvious.
If we investigated this author (assuming they do in fact run a photo service), we would inevitably find that, unless they are incompetent, they have to moderate content, either blindly or based on flags.
So if Apple is going to jail for child porn because they moderate / report content after flagging (reporting is normally actually required), then this article's writer should be going to jail as well - I guarantee his service stores, forwards, and otherwise handles CSAM content.
My complaint is just this: HN used to focus on stuff where folks didn't always jump to worst-case arguments (i.e., Apple is guilty of child porn and is committing felonies) without at least allowing that Apple MAY have given this a tiny bit of thought.
It's just tiresome to wade through. It's a mashup of everything from "they are blocking too much" and "they are the evil government's henchmen" to "they are breaking the law and going to jail on felony child porn charges."
I get that it generates interaction (here I am), but it's annoying after a while. Clickbait sells, though, no question, so pieces like "One Bad Apple" are probably going to keep coming at us.
Well, there is a difference. As stated in the article, they don't expect to see CP or CSAM: "We are not 'knowingly' seeing it since it makes up less than 0.06% of the uploads ... We do not intentionally look for CP."
Whereas Apple is moderating the suspected images, so they intentionally look for CP (which, according to the author and his lawyer, is a crime).
This is such a pathetic interpretation. All flagging systems (which is how moderation works - Facebook does not manually review every photo posted) alert the company that there may be a problem. Moderators do their thing. They EXPECT to see bad content based on these flags. Smaller places may act on complaints.
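To make that flag-then-review flow concrete, here is a minimal sketch of a generic moderation pipeline. This is not any particular company's system; the names (ModerationQueue, ingest, review) and the signals are invented for illustration, and the point is simply that humans only ever see what automation has already flagged.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ModerationQueue:
    """Items only reach a human after an automated signal flags them."""
    flagged: list[str] = field(default_factory=list)

    def ingest(self, photo_id: str, signals: list[Callable[[str], bool]]) -> None:
        # Automated pass: nobody looks at the photo unless a signal fires
        # (hash match, classifier score, user report, etc.).
        if any(signal(photo_id) for signal in signals):
            self.flagged.append(photo_id)

    def review(self, moderator_confirms: Callable[[str], bool]) -> list[str]:
        # Human pass: moderators expect to see bad content here, because
        # the flags select for exactly that. Confirmed items would then be
        # reported onward (e.g. to NCMEC, as the law requires).
        confirmed = [p for p in self.flagged if moderator_confirms(p)]
        self.flagged.clear()
        return confirmed

queue = ModerationQueue()
queue.ingest("photo-123", signals=[lambda pid: True])  # stand-in for a flag firing
print(queue.review(lambda pid: True))                  # ['photo-123']
```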
The idea that this makes them guilty of felony child porn charges is so ridiculous and offensive.
Facebook (with Instagram) alone is dealing with something like 20 million of these photos a year.
How about we ask the actual folks involved in this (NCMEC) what they think about Apple's "felonies"? Maybe they have some experts?
Oh wait - the folks actually dealing with this, the people who have to handle all of this material, are writing letters THANKING Apple for helping reduce the spread of this crap.
So: we have a big company like Apple (with a ton of folks looking at this sort of thing). We have the National Center for Missing & Exploited Children looking at this. And we are being told, by some guy who will not even name the attorney or law firm behind this opinion, that Apple is committing child porn felonies.
Does no one see how making these kinds of horribly supported, explosive claims just trashes discourse? Apple are child pornographers! So-and-so is horrible for X.
Can folks dial it back a TINY bit - or is the outrage factory the only thing running these days?
Yeah, as usual I'm worried that the people who claim others don't read are the ones not reading (or not comprehending) what the author is trying to say. To me it seems like moderation in general is fine. What Apple is doing here is that after the system flags that a certain threshold has been crossed, they manually review the material. The author states that no one should do that, i.e., that the law explicitly prohibits anyone from even trying to verify. If you suspect CP, you've got to forward it to NCMEC and be done with it.
I 100% understand why Apple doesn't want to do that - automatic forwarding - they're clearly worried about false positives. I also think Apple has competent lawyers. It's entirely possible that the author's and their lawyer's interpretation is wrong.
Point is - the author isn't trying to say moderation is illegal.
The whole thing rests on whether Apple knows that the content is CSAM or not. And they don't. The author gets this fundamentally wrong. Apple does not know whether an image is a match when the voucher is created; the process does, but they don't. They only find out once the system detects a threshold number of matches in the account, and at that point they can verify the matches.
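For anyone who wants the threshold idea made concrete, here is a toy sketch using Shamir secret sharing, which is loosely the kind of mechanism Apple describes (a share of a per-account review key rides along with each matching voucher). This is not Apple's actual protocol; the threshold, field size, and variable names are made-up parameters. It just shows that below the threshold the server mathematically cannot recover the key, and therefore cannot "know" about any individual match.

```python
import random

P = 2**61 - 1  # Mersenne prime used as a toy field modulus

def split_secret(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Shamir split: any t of the n shares reconstruct `secret`; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation of the shared polynomial at x = 0."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

# Hypothetical account setup: the key that would let a reviewer decrypt the
# flagged previews is split so that `t` matching vouchers are needed to see anything.
review_key = random.randrange(P)
t, n = 3, 10                      # made-up threshold and number of matching photos
shares = split_secret(review_key, t, n)

assert reconstruct(shares[:t]) == review_key      # at the threshold: key recoverable
assert reconstruct(shares[:t - 1]) != review_key  # below it: almost surely no usable key
```

In this toy model the server holding fewer than t shares has no way to tell which, if any, individual photo matched, which is the sense in which "the process knows, but they don't."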
Additionally, we already know they consulted with NCMEC on this because of the material that leaked the other day: an internal memo from Apple leadership and a letter from NCMEC congratulating them on the new system. If you think they haven't evaluated the legality of what they're doing, you're just wrong.