
> someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact.

What you are saying here kind of sidetracks the actual problem those critics have though, doesn't it? The problem is not the acknowledgement of how bad child abuse is. The problem is whether we can trust the people who claim that everything they do, they do it for the children. The problem is damaged trust.

And I think your last paragraph illustrates why child abuse is such an effective excuse: if someone criticizes your plan, just sidestep the criticism and go into how bad child abuse really is.

I'm not accusing you of anything here by the way, I like the article and your insight. I just see a huge danger in the critics and those who actually want to help with this problem being pitted against each other by people who see the topic of child abuse as nothing but a convenient vehicle for their own goals, the very people who have already heavily damaged that trust.



What are the main instances where these claims were made fraudulently?


One example on the topic, quoted from EFF's post Apple's Plan to "Think Different" About Encryption Opens a Backdoor to Your Private Life [1]:

We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content [2] that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT) [3], is troublingly without external oversight, despite calls from civil society [4]. While it’s therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content [5] as “terrorism,” including documentation of violence and repression, counterspeech, art, and satire.

[1]: https://www.eff.org/deeplinks/2021/08/apples-plan-think-diff...

[2]: https://www.eff.org/deeplinks/2020/08/one-database-rule-them...

[3]: https://gifct.org/

[4]: https://cdt.org/insights/human-rights-ngos-in-coalition-lett...

[5]: https://www.eff.org/wp/caught-net-impact-extremist-speech-re...


Thanks, I think I see what you mean. Essentially another organisation has used file hashes to scan for extremist material, by their own definition of "extremist".

I agree that has potential for abuse but it doesn't seem to explain what the actual link is to NCMEC. It just says "One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed..." but doesn't name this "technology". Is it talking about PhotoDNA?


I'm not the one you replied to, but I have some relevant thoughts on the process. I value my fundamental right to privacy, and I understand that technology can be used to stop bad things from happening if we accept a little less privacy.

I am okay with giving up some privacy under certain conditions, and those conditions are directly tied to democracy. In essence, for any database or technology that systematically violates privacy, we need the power distributed and "fair" trials, i.e. the legislative, executive and judiciary branches each playing their part:

* Only the legislative branch, the politicians, should be able to enable violations of privacy, i.e. by law. Then and only then can companies be allowed to do it.

* The executive branch has oversight of the process and the tech; they would in essence be responsible for saying go/no-go to a specific implementation, and for performing a yearly review, all according to the law. This also includes the police.

* The judiciary branch, the justice system, is responsible for looking at "positive" hits and for granting the executive branch case-by-case powers to use the information.

If any of those three is missing, I am not okay with the systematic violation of privacy.


Here in France, law enforcement & probably some intelligence agencies used to monitor P2P networks for child pornography & content terrorists like to share with one another.

Now we have the expensive and pretty much useless "HADOPI" paying a private company to do the same for copyright infringement.

Ironically enough, it seems the interior and defence ministries cried out when our lawmakers decided to expand it to copyright infringement at the request of copyright holders. They were afraid some geeks, either out of principle or simply to keep torrenting movies, would democratise already existing means of hiding oneself online, and create new ones.

Today, everyone knows to look for a VPN or a seedbox. Some even accept payments in crypto or gift cards.

¯\_(ツ)_/¯


Thanks for pointing that out, I have updated my comment to provide the links from the quote, which were hyperlinks in the EFF post.

> it doesn't seem to explain what the actual link is to NCMEC

The problem I see is the focus that is put onto the CSAM database. I quote from Apple's FAQ on the topic [1]:

Apple only learns about accounts that are storing collections of known CSAM images, and only the images that match to known CSAM

and

Our process is designed to prevent that from happening. CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations.

Which is already dishonest in my opinion. Here are the main problems, and my reasons for finding the implementation highly problematic, based on how I personally understand things:

- Nothing would actually prevent Apple from adding different database sources. The only thing the "it's only CSAM" part hinges on is Apple choosing to only use image hashes provided by NCMEC. It's not a system built "specifically for CSAM images provided by NCMEC". It's ultimately a system to scan for arbitrary image hashes, and Apple chooses to limit those to one specific source with the promise to keep the usage limited to that.

- The second large attack vector comes from outside, what if countries decide to fuse their official CSAM databases with additional uses? Let's say Apple does actually mean it and they uphold their promise: There isn't anything Apple can do to guarantee that the scope stays limited to child abuse material since they don't have control over the sources. I find it hard to believe that certain figures are not already rubbing their hands about this in a "just think about all the possibilites" kind of way.

In short: the limited scope rests only on two promises: that Apple won't expand it, and that the source won't be merged with other topics (like terrorism) in the future.

The red flag for me here is that Apple acts as if there were some technological factor that ties this system only to CSAM material.
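
To make that point concrete, here is a minimal sketch of a hash-set matcher, purely illustrative and not Apple's actual NeuralHash/private-set-intersection pipeline. The matching logic has no notion of what the hashes represent; the "CSAM only" property exists solely in whichever database the operator chooses to load. All names and hash values are hypothetical.

    from typing import Iterable, List, Set


    def load_hash_database(entries: Iterable[str]) -> Set[str]:
        """Build a set of known image hashes from any provider's list."""
        return {h.strip().lower() for h in entries if h.strip()}


    def scan_images(image_hashes: Iterable[str], database: Set[str]) -> List[str]:
        """Return the image hashes that match the loaded database.

        Nothing here is specific to CSAM: whether `database` holds NCMEC
        hashes, GIFCT "terrorism" hashes, or anything else is a policy
        decision made entirely outside this function.
        """
        return [h for h in image_hashes if h.strip().lower() in database]


    # Hypothetical hash values, for illustration only.
    ncmec_db = load_hash_database(["a1b2c3d4", "e5f6a7b8"])
    matches = scan_images(["a1b2c3d4", "00000000"], ncmec_db)
    print(matches)  # ['a1b2c3d4']

Swapping in a different database changes what the system hunts for without touching a single line of the matching code, which is exactly why the limitation is a policy promise rather than a technical property.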

Oh and of course the fact that the fine people at Apple think (or at least agree) that Electronic Frontier Foundation, the Center for Democracy and Technology, the Open Privacy Research Center, Johns Hopkins, Harvard's Cyberlaw Clinic and more are "the screeching voices of the minority". Doesn't quite inspire confidence.

[1]: https://www.apple.com/child-safety/pdf/Expanded_Protections_...



Snoopers Charter in the UK springs to mind.


The Snoopers' Charter was always fairly broad though. It was never about child abuse; it covers collecting phone records etc., which are used in most police investigations. Unless you are referring to some particular aspect of it I'm not aware of?

I understand very broad stuff might be sold as "helping to fight child abuse" while avoiding talking about the full scope, but that isn't what Apple/NCMEC are doing here. They are making quite explicit claims about the scope.

That's why I'm wondering if there is precedent for claims about "we will ONLY use this for CSAM" which were later backtracked.



