Hacker News

Yeah, this is my reading of the situation too. From a technical point of view, what Apple is doing seems like a reasonable approach to me. What I absolutely do not trust is any claim that it will never ever be expanded: say, governments requiring similar tech be applied to all messages before they are encrypted in order to look for dissidents, and then, more generally, at opposition parties. (And of course there are enough competent technologists available to build custom encrypted chat apps that hostile governments could only achieve their goals by putting the AI in the frame buffer, which opens the door to subliminal attacks against political enemies: targeting the AI with single frames so that the device user doesn't even notice anything.)

But I know I have paradoxical values here:

On the one hand, I regard abusive porn as evil. I want it stopped.

On the other, I have witnessed formerly legal acts become illegal, and I grew up in a place where there are sexual acts you can legally perform yet not legally possess images of. I think governments (and people in general) follow a sense of disgust-minimisation, even when this comes at the expense of harm-minimisation.

And any AI which can genuinely detect a category of image can also be used to generate novel examples of that category without directly harming a real human.
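The detect-to-generate duality above can be sketched in miniature: run gradient ascent on a differentiable detector's score and you synthesise a novel input the detector rates as belonging to the category (the idea behind activation maximisation). Everything here is a made-up toy, a random linear scorer standing in for a trained model, not any real classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)  # toy "detector" weights (stand-in for a trained model)

def score(x):
    """Sigmoid of a linear scorer: the detector's 'probability in category'."""
    return 1.0 / (1.0 + np.exp(-w @ x))

x = np.zeros(64)  # start from a blank "image"
for _ in range(200):
    p = score(x)
    grad = w * p * (1.0 - p)  # d(score)/dx for this sigmoid-linear model
    x += 0.5 * grad           # ascend the detector's own score
    x = np.clip(x, -1.0, 1.0) # keep "pixels" in a valid range

# x is now a novel input the detector scores far above the blank start,
# produced using nothing but the detector itself.
```

A real detector is a deep network rather than a linear scorer, but the mechanics are the same: anything you can differentiate a category score through, you can climb.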

I don’t have a strong grasp on what I expect the future to look like. My idea of “the singularity” is that, rather than a single point in the future where all tech gets invented at once, it’s an event horizon beyond which we can no longer make reasonably accurate predictions. I think this horizon is currently 2025-2035, and that tech is changing the landscape of what morality looks like so fast that I can’t see past it.



