"when someone tries to identify you using an unaltered image of you [...] they will fail."
I wonder how this holds up when someone takes a photo of that 'protected image'. I can imagine that if these minuscule pixel-scale changes aren't visible to the naked eye, my crappy 6 megapixel camera will overlook them as well. If I then proceed to feed that photo into my image recognition algorithm, is it still protected?
They do go over the effects of compression - which they say degrades the protection - but compression also degrades the identification accuracy of the AI model.
So if your crappy 6 megapixel camera can't take a clear shot of the cloaked pixels - effectively applying a blur filter - that same blur would also hurt the AI's ability to identify you.
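If you want a rough feel for that trade-off, here's a toy Python sketch (emphatically not the Fawkes code - the +/-3 perturbation budget, the random "image", and JPEG quality 75 are all made-up stand-ins) that simulates a lossy re-capture step and checks how much of a tiny perturbation is still correlated with what comes out the other side:

    # Toy sketch: how much of a tiny pixel perturbation survives a lossy
    # step like re-photographing or JPEG compression? All numbers here
    # are illustrative assumptions, not values from the paper.
    import io
    import numpy as np
    from PIL import Image

    def jpeg_roundtrip(arr: np.ndarray, quality: int = 75) -> np.ndarray:
        """Push an RGB uint8 array through a JPEG encode/decode cycle."""
        buf = io.BytesIO()
        Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return np.array(Image.open(buf))

    rng = np.random.default_rng(0)
    # Stand-in for a face photo; a real image would compress very differently.
    original = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

    # "Cloak": a small perturbation of at most +/-3 intensity levels per channel.
    intended = rng.integers(-3, 4, size=original.shape)
    cloaked = np.clip(original.astype(int) + intended, 0, 255).astype(np.uint8)

    # Compare cloaked-after-compression against original-after-compression,
    # so JPEG artifacts common to both don't get counted as "surviving cloak".
    baseline = jpeg_roundtrip(original)
    recaptured = jpeg_roundtrip(cloaked)
    residual = recaptured.astype(int) - baseline.astype(int)

    # Correlation between the perturbation we injected and what remains of it.
    corr = np.corrcoef(intended.ravel(), residual.ravel())[0, 1]
    print("mean |intended perturbation|:", np.abs(intended).mean())
    print("correlation with residual:   ", corr)

A real cloak is a structured perturbation aimed at a feature extractor rather than random noise, so the actual survival rate could look quite different - this only shows one way to measure "how much of the cloak made it through" a lossy pipeline.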
More importantly, assuming they have a database of such cloaked images, what if someone just applies the same cloaking technique to the image of you? Can they still identify you?
That's a pretty lazy assumption - even a quick read of the original article makes me fairly sure it's incorrect.
There are quite a lot of comments here that stink of Dunning-Kruger candidates, who read the headline and first paragraph, then just started typing their random "wisdom" assuming they're smarter and better informed than the team of PhD researchers who wrote the paper being discussed. (Am I just overly grumpy and judgemental today? Was HN always this bad?)