Hacker News | blindmonk3000's comments

Here is a good video describing how this might work. Near the end he shows that even a printed-out "cloaked" image, viewed from different angles, can still fool a neural network classifier.

https://www.youtube.com/watch?v=4rFOkpI0Lcg
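For context, this kind of "cloaking" is an adversarial perturbation: a small, carefully chosen change to the pixels that flips the classifier's prediction. A minimal NumPy sketch of the classic fast gradient sign method (FGSM) on a toy linear classifier (the toy model and all names here are illustrative assumptions, not taken from the video):

```python
import numpy as np

# Toy linear classifier: logits = W @ x, predict the argmax class.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 "pixel" features
x = rng.normal(size=8)        # the clean "image"
true_class = int(np.argmax(W @ x))

# FGSM: perturb the input along the sign of the gradient of the
# true class's logit, in the direction that *lowers* that logit.
eps = 0.5
grad = W[true_class]                # d(logit_true)/dx for a linear model
x_adv = x - eps * np.sign(grad)     # bounded perturbation: |delta_i| <= eps

print("true-class logit before:", float((W @ x)[true_class]))
print("true-class logit after: ", float((W @ x_adv)[true_class]))
```

The perturbation is bounded per pixel, so the cloaked image can look nearly identical to the original. Making the attack survive printing and off-angle viewing, as in the video, is usually done by averaging the gradient over many random transformations (scaling, rotation, lighting) before taking the sign, so the perturbation works in expectation rather than only for one exact pixel grid.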
