
So... how well does "human review" work for copyright claims on YouTube?

This is basically fearmongering: say "if you're not a pedo, you have nothing to fear", install the tech on all phones, and then use that tech to find the next WikiLeaks leaker (whoever was the first person with this photo), Trump supporters (just add the Trump-beats-CNN gif to the hashes), anti-China protesters (Winnie the Pooh photos), etc.

This is basically like forcing everyone to "voluntarily" record inside their houses: an AI on the camera would recognise drugs, and only those recordings would be sent to the police.



I was pointing out an inaccuracy in the article, not commenting on whether this tech is a good idea. I think if we're opposed to it, we should avoid misrepresenting it in our arguments.

On which note... it really does seem to be voluntary, as there's an easy opt-out of the process in the form of "not using iCloud Photos". Or Google Photos, or assorted other services, which apparently all do similar scanning.

Yes, there's a slippery-slope argument here, but the actual service-as-it-exists-today really does seem scoped to a cautious examination of possible child porn that's being uploaded to Apple's servers.


> just add the Trump-beats-CNN gif to the hashes

They could literally do this right now server-side and nobody would ever know.


But no one uploads that to iCloud. That's why they must implement this feature client-side ("because if you're not a pedo, you have nothing to worry about"), and then enable it for on-phone-only photos too ("because if you're not a pedo, you have nothing to worry about"). Then use the same feature on OSX ("because if you're not a pedo, you have nothing to worry about").
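
To make concrete what "add the gif to the hashes" means mechanically: the matching step boils down to a set-membership check on perceptual hashes, and the check itself is identical wherever it runs, server-side or on-device. A minimal sketch in Python, using the open-source imagehash library as a stand-in (Apple's NeuralHash is a different, neural-network-based hash; the blocklist entry and distance threshold below are made up for illustration):

    import imagehash           # open-source perceptual hashing library
    from PIL import Image

    # Hypothetical blocklist of perceptual hashes of "known bad" images.
    # The database is opaque to the user, so whoever maintains it can add
    # the hash of any image (a leak, a meme, a protest photo) and the
    # matching code below behaves exactly the same.
    BLOCKLIST = {imagehash.hex_to_hash("ffd8e0c0a0808080")}

    MATCH_DISTANCE = 4  # Hamming-distance threshold (illustrative value)

    def is_flagged(path: str) -> bool:
        # phash produces a 64-bit perceptual hash; subtracting two
        # ImageHash objects yields their Hamming distance.
        h = imagehash.phash(Image.open(path))
        return any(h - bad <= MATCH_DISTANCE for bad in BLOCKLIST)

    if is_flagged("photo.jpg"):
        print("match: escalate for human review")

The code can't tell a CSAM hash from any other hash; the only safeguard is who controls the list, which is the whole point of the slippery-slope objection above.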



