
Thanks, I think I see what you mean. Essentially another organisation has used file hashes to scan for extremist material, by their definition of extreme.

I agree that has potential for abuse but it doesn't seem to explain what the actual link is to NCMEC. It just says "One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed..." but doesn't name this "technology". Is it talking about PhotoDNA?



I'm not the one you replied to, but I have some relevant thoughts on the process. I value my fundamental right to privacy. I understand that technology can be used to stop bad things from happening if we accept a little less privacy.

I am okay with giving up some privacy under certain conditions, and those conditions are directly tied to democracy. In essence, for any database or tech that systematically violates privacy, we need the power distributed and "fair" trials, i.e. involvement of the legislative branch, the executive branch and the judiciary branch:

* Only the legislative branch, the politicians, should be able to authorise a violation of privacy, i.e. by law. Then and only then can companies be allowed to do it.

* The executive branch has oversight of the process and the tech; it would in essence be responsible for saying go/no-go to a specific implementation, and for performing a yearly review, all according to the law. This also includes the police.

* The judiciary branch, the justice system, is responsible for reviewing "positive" hits and granting the executive branch case-by-case powers to use the information.

If we miss any of those three, I am not okay with systematic violation of privacy.


Here in France, law enforcement & probably some intelligence agencies used to monitor P2P networks for child pornography & content terrorists like to share with one another.

Now we have the expensive and pretty much useless "HADOPI" paying a private company to do the same for copyright infringement.

Ironically enough, it seems the interior and defence ministries cried out when our lawmakers decided to expand this to copyright infringement at the request of copyright holders. They were afraid some geeks, either out of principle or simply to keep torrenting movies, would democratise already-existing means of hiding oneself online, and create new ones.

Today, everyone knows to look for a VPN or a seedbox. Some even accept payments in crypto or gift cards.

¯\_(ツ)_/¯


Thanks for pointing that out, I have updated my comment to provide the links from the quote, which were hyperlinks in the EFF post.

> it doesn't seem to explain what the actual link is to NCMEC

The problem I see is the focus that is put on the CSAM database. I quote from Apple's FAQ on the topic [1]:

> Apple only learns about accounts that are storing collections of known CSAM images, and only the images that match to known CSAM

and

> Our process is designed to prevent that from happening. CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations.

Which is already dishonest in my opinion. Here are the main problems, and my reasons for finding the implementation highly problematic, as I personally understand things:

- Nothing would actually prevent Apple from adding different database sources. The only thing the "it's only CSAM" part hinges on is Apple choosing to use only the image hashes provided by NCMEC. It's not a system built "specifically for CSAM images provided by NCMEC"; it's ultimately a system to scan for arbitrary image hashes, and Apple chooses to limit those to one specific source with a promise to keep the usage limited to that (see the sketch after this list).

- The second large attack vector comes from outside: what if countries decide to fold other categories of content into their official CSAM databases? Let's say Apple actually means it and upholds its promise: there isn't anything Apple can do to guarantee that the scope stays limited to child abuse material, since they don't control the sources. I find it hard to believe that certain figures aren't already rubbing their hands about this in a "just think about all the possibilities" kind of way.
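To make the first point concrete, here is a minimal, hypothetical sketch of how a hash-matching pipeline of this kind works in principle. The function names, the threshold value, and the use of a cryptographic hash as a stand-in for a perceptual hash are my own assumptions, not Apple's actual NeuralHash/private-set-intersection design. The point is that the matching logic is completely agnostic to who compiled the hash list; swapping NCMEC's database for any other requires no change to the mechanism itself.

    import hashlib

    def perceptual_hash(image_bytes: bytes) -> str:
        # Placeholder: a real system would use a perceptual hash
        # (PhotoDNA-like, or Apple's NeuralHash). A cryptographic hash
        # keeps this sketch self-contained and runnable.
        return hashlib.sha256(image_bytes).hexdigest()

    def scan_library(images, known_hashes, threshold):
        # Count how many photos match the supplied hash list and flag the
        # account once a threshold is crossed. (Apple describes a similar
        # threshold mechanism; the value used below is made up.)
        matches = sum(1 for img in images if perceptual_hash(img) in known_hashes)
        return matches >= threshold

    # The entire "only CSAM" scope lives in this one input. The matching
    # code neither knows nor cares which organisation compiled the list.
    known_hashes = {perceptual_hash(b"example-known-image")}   # stand-in for the NCMEC list
    user_photos = [b"holiday photo bytes", b"example-known-image"]

    print(scan_library(user_photos, known_hashes, threshold=1))  # True

Feed the same scan_library a government-supplied list of, say, protest imagery and nothing else changes, which is exactly the concern.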

In short: the limited scope rests on only two promises: that Apple won't expand it, and that the source won't be merged with other topics (like terrorism) in the future.

The red flag for me here is that Apple acts as if there was some technological factor that ties this system only to CSAM material.

Oh and of course the fact that the fine people at Apple think (or at least agree) that Electronic Frontier Foundation, the Center for Democracy and Technology, the Open Privacy Research Center, Johns Hopkins, Harvard's Cyberlaw Clinic and more are "the screeching voices of the minority". Doesn't quite inspire confidence.

[1]: https://www.apple.com/child-safety/pdf/Expanded_Protections_...



