
Would it be ethical to train a model to identify child porn so it could be excluded from training sets, or even the internet, automatically? It seems like an ideal, very useful application for CV AI, but you might have to train it on CP to make it effective, so…


I can see some practical difficulties with this. The first is that CSAM is radioactive and can’t be touched, so only a very limited circle of people could even approach this task. Related to that, the model weights would contain representations of the training data, which, if distributed, could be used to get some of the source material back out.


Hashes of existing child porn are already created and disseminated for automated detection; a CP detector model isn't too far from that.
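
For context, exact-match detection is essentially a set lookup on file digests. A minimal sketch in Python, assuming a hypothetical known_hashes.txt denylist with one hex digest per line (real hash-sharing programs work on the same principle, just with far more infrastructure):

```python
import hashlib

# Hypothetical denylist: one SHA-256 hex digest per line.
with open("known_hashes.txt") as f:
    known_hashes = {line.strip() for line in f}

def is_known(path: str) -> bool:
    """Return True if the file's digest appears on the denylist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() in known_hashes
```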


Hashes just detect old, unmodified content, not new stuff, or even AI-generated stuff.


Wasn’t the hashing mechanism that Apple was proposing capable of dealing with some attempts to obfuscate the CSAM image with edits?


It's also super easy to change the video slightly to alter the hash.
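
For a cryptographic hash this is true by design: changing even one bit produces a completely unrelated digest (the avalanche effect). A self-contained sketch:

```python
import hashlib

data = b"frame after frame of video data..."  # stand-in for a real file
tweaked = bytearray(data)
tweaked[0] ^= 1  # flip a single bit

print(hashlib.sha256(data).hexdigest())
print(hashlib.sha256(bytes(tweaked)).hexdigest())  # unrecognizably different
```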


Obviously, but the point of creating a useful hash function in this case is to make it invariant under the sorts of transformations that are trivial to a human, whilst avoiding collisions between genuinely different content. Microsoft created something called PhotoDNA that tries to do this.
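
PhotoDNA itself isn't public, but a toy perceptual hash like dHash illustrates the principle: fingerprint the coarse structure of the image so that rescaling or recompression only flips a few bits, then compare fingerprints by Hamming distance instead of exact equality. A minimal sketch (the file names are hypothetical):

```python
from PIL import Image  # pip install Pillow

def dhash(path: str, size: int = 8) -> int:
    """Difference hash: a 64-bit fingerprint of coarse image structure."""
    # Shrink to (size+1) x size grayscale, then compare adjacent pixels.
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Near-duplicates land within a few bits of each other, e.g.:
# hamming(dhash("original.jpg"), dhash("rescaled.jpg")) <= 10
```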


[flagged]


>The major problem with CP is that the most cost-effective way of producing it is abusing children.

I'm not sure that's completely true. Despite the surprisingly widespread notion that generative AI can only strictly reproduce things it's been trained on, it can still combine what it has learned to create novel things. For example, image generation models are trained on a wide variety of adult human figures. Children are just... kind of a subset of what makes an adult, so via some clever prompting, it should be possible to steer a model to generate CP. It should then be possible to generate synthetic data for tagging, fine-tuning, training a new model, or whatever.

The problem with this is a person needs to steer that ship -- who's going to do it? Even if it has the potential to prevent abuse, it's something that's so universally unacceptable that people generally don't want to touch the topic. There are law enforcement officers whose job it is to manually search for and assess CP on the web, but given the underlying philosophy of law enforcement, I'm sure most of them would not approve of the idea of generating CP, even if it's synthetic. So... the ideal pick would be an actual (hopefully non-offending) pedophile, but who wants to be known as an employer of pedophiles?

There are supposedly communities of pedophiles who wish to stay non-offending, and I imagine it'll be an "outsider" group that tries this, if at all.


Furthering the sexual fantasies of pedophiles actually does not help children, but obviously creates predators as more people are exposed.


> ... but obviously creates predators as more people are exposed.

If we look at more traditional porn, the uptick in availability seems to have correlated with less sex. And the advent of violent video games is associated with no uptick in violence, as far as I have been told, although it is getting harder for armies to recruit.

I reckon the default position here is that if you give people a dopamine hit when they are staring at a screen, they stare at a screen more, not that they start harming others. The idea that we don't want marginal cases to awaken a sexual attraction to children makes sense to me, but I don't think more "predators" will be created; we'd probably just see a collapse in the economic incentives currently in place to harm children.


>the economic incentives currently in place to harm children

I really do not believe that. There isn't a single person in the world who chooses to be a child pornographer because the economics of it are good. They do it because it allows them to carry out their sexual fantasies, which are about abusing children. The abuse has to be the point; if it weren't, just abstaining from it would be the only choice.

It is extremely easy to comprehend that your sexual urges (which can be satisfied in a myriad of ways) are monumentally less important than a child's wellbeing. Choosing your urges means that you are doing it because of the abuse, which of course no virtual recreation can provide.


>obviously creates predators as more people are exposed. (from your earlier comment)

>There isn't a single person in the world

Is there any good research to back these up? Societies have had similar lines of reasoning around prohibitions and bans in the past that seemed intuitively "obvious," some of which have already been mentioned by others.

As mentioned by someone else, CP is "radioactive," and as such would probably fetch a correspondingly high price on a black market. It's an extensively studied and observable behavior that people can and will partake in unethical or immoral activities if the economic incentive is high enough. It's not an either-or thing; it can be about the abuse, but it can also be about money, or even just a lack of empathy. The "lesser" crime of CP distribution doesn't necessarily have to be about the abuse either; people can find themselves in crime rings out of desperation or coercion.

If realistic AI-generated CP can greatly devalue the real thing, the risk would become much less worth taking, except for the types of people you specifically mention. As I mentioned in another comment, it's possible to train for and generate victimless synthetic CP -- AI learns concepts, similar to how a human artist might piece together concepts to create something they've never seen before.

The idea that virtual content can't provide some sort of catharsis or outlet for at least some people is also questionable, unless we can get a significant number of actual pedophiles to come and testify to this. People already find outlets in acting out any number of emotional, sexual, or abuse fantasies via text LLMs, despite it being only text. And generative AI is becoming increasingly realistic, including in the audio and video domains, even if it's disturbing to think about.

There is evidence suggesting that pedophilia is to some extent a result of biology and uncontrollable life circumstances, and there are non-offending pedophile support groups, so avoiding abuse crimes is a choice for a significant, non-zero portion of pedophiles.


That's not obvious to me at all. What if a trove of pornography provides a non-violent alternative, meaning fewer children get hurt? Either hypothesis seems at least somewhat likely, and a bold claim that one or the other is true should be supported by at least some evidence.

But even with AI-generated porn, children get hurt indirectly, imo, for the same reason that using AI to generate art violates the property rights of artists.


That's the same logic as 'video games cause violence'. Do video games cause violence?



