
Never thought about using CSAM image hash alerts as a measure of platform data leaks (and popularity, since I doubt bots would be sharing them). That's very smart.

And it shows that FB eclipses everyone by an insane margin. It's scary!
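To make the measurement idea concrete, here's a minimal sketch of counting, per platform, how many uploads match a list of known image hashes, which is roughly what these alert counts imply. Everything here is assumed for illustration: real systems use perceptual hashing (e.g. PhotoDNA-style) rather than SHA-256, and the data shapes are invented for the example.

```python
# Illustrative sketch only: count per-platform matches against a set of
# known image hashes, treating each match as one "alert".
# SHA-256 and the dict of platform -> uploaded image bytes are assumptions
# for this example, not how any real detection pipeline works.
import hashlib
from collections import Counter

known_hashes = {
    # placeholder digests standing in for a known-image hash list
    hashlib.sha256(b"example-known-image-1").hexdigest(),
    hashlib.sha256(b"example-known-image-2").hexdigest(),
}

def count_alerts(uploads_by_platform: dict[str, list[bytes]]) -> Counter:
    """Return how many uploads per platform match the known-hash list."""
    alerts = Counter()
    for platform, images in uploads_by_platform.items():
        for image_bytes in images:
            if hashlib.sha256(image_bytes).hexdigest() in known_hashes:
                alerts[platform] += 1
    return alerts

# A much higher count for one platform then reads as "far more of this
# material circulating there" -- the popularity/leak proxy described above.
```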

About your point on business accounts: the documents I reviewed included dialog-tree bots managed by Meta. Not sure whether not having that changes things... but in that case it was spelled out that Meta is the recipient.



It's more a UX/org thing. In iMessage, how do you report a problematic message? You can't easily do it.

In WhatsApp, the report button is on the same menu you use to reply/hide/pin/react.

Once you do that, it sends the offending message to Meta, unencrypted. To me, that seems like a reasonable choice. Even with "proper" e2ee, it still allows rooting out nasty/illegal shit, and those reports come from real people rather than automated CSAM hashing on encrypted messages. (Although I suspect there is some tracking before and after.)
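As a rough sketch of what that flow looks like conceptually: the client already holds the decrypted text, so a report is just a separate, user-initiated upload of that plaintext to the platform, outside the e2ee channel. The function names, payload fields, and the amount of context forwarded below are all assumptions for illustration, not WhatsApp's actual protocol.

```python
# Conceptual sketch of a "report" flow in an e2ee messenger: the client
# holds the plaintext locally, so reporting bundles it up and sends it to
# the platform's moderation endpoint over ordinary TLS.
# Payload shape and the amount of context included are assumptions.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    plaintext: str  # decrypted locally; e2ee only protects transport

def build_report(chat_history: list[Message], offending_index: int) -> dict:
    """Bundle the offending message plus a little surrounding context."""
    start = max(0, offending_index - 4)
    context = chat_history[start:offending_index + 1]
    return {
        "reported_sender": chat_history[offending_index].sender,
        "messages": [
            {"sender": m.sender, "text": m.plaintext} for m in context
        ],
    }

# The client would then POST this bundle to the platform's report endpoint,
# entirely outside the e2ee channel -- which is why the platform receives
# the reported content unencrypted even though the chat itself is e2ee.
```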

It's the same with Instagram/Facebook. The report button is right there. I don't agree with FB on many things, but I think they've made the right choice here.



