Hacker News

One of the reasons we can't have nice things.

CSAM spam filtering is a bit of a moat for larger companies able to manage the costs of moderating it.

I would like to see AI moderation of CSAM, perhaps an open-weights model provided by a nation state. With confidential computing, such models could be run on local hardware as a pre-filtering step without undermining anonymity.



> I would like to see AI moderation of CSAM, perhaps an open-weights model provided by a nation state.

I don't envy the people who would have to trauma their way through creating such a dataset. Still, it would be useful, yes.

> With confidential computing, such models could be run on local hardware as a pre-filtering step without undermining anonymity.

I'm not sure it'd make sense to run locally. Many clients aren't powerful enough to run it on the receiving end (plus every client would need to run it, instead of fewer entities), and for obvious reasons it doesn't make sense to run it on the sender's end.


I guess I meant local to the server, not the client (edge). But perhaps a very light model could also be run on the edge.

I built a porn-detection filtering algorithm back in the Random Forest days. It worked well, except for the French and their overly flexible definition of 'art'. The 'hot dog / not hot dog' gag from HBO's Silicon Valley is a pretty accurate picture of what that was like. I've thought about what it would take to build a CSAM filter, and whether it could be trained entirely within a trusted enclave without external access to the underlying data, and I believe it is possible.
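For anyone curious what the Random Forest approach looks like in practice, here's a minimal sketch. It's not the commenter's actual system: the `color_histogram` feature extractor, the synthetic "red-heavy" stand-in data, and all parameters are hypothetical illustrations of the general shape (hand-crafted image features fed to an ensemble classifier), assuming scikit-learn and NumPy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def color_histogram(image, bins=8):
    """Flatten an HxWx3 uint8 image into normalized per-channel histograms.

    Real filters of that era used richer hand-crafted features
    (skin-tone ratios, texture, shape detectors); this is a toy stand-in.
    """
    feats = []
    for ch in range(3):
        hist, _ = np.histogram(image[..., ch], bins=bins, range=(0, 256))
        feats.append(hist / image[..., ch].size)  # normalize to frequencies
    return np.concatenate(feats)

def make_image(flagged):
    """Synthetic placeholder data: 'flagged' images skew red-heavy."""
    base = rng.integers(0, 256, size=(32, 32, 3))
    if flagged:
        base[..., 0] = rng.integers(150, 256, size=(32, 32))
    return base

# Build a small labeled dataset: 3 channels x 8 bins = 24 features each.
X = np.stack([color_histogram(make_image(i % 2 == 1)) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On data this cleanly separable the forest scores near-perfectly; the hard part in a real deployment is the feature design and the long tail of ambiguous content (the 'art' problem above), not the classifier itself.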



