A paper came out a while ago showing that neural networks are extremely vulnerable to adversarial examples [1]. The authors showed that even slight perturbations of an image, generated with their method, could cause a NN to misclassify it while looking no different at all to a human. I'm interested in whether methods like this could be used to extend the life of CAPTCHAs a bit longer, even as computers are starting to beat humans at object recognition tasks. A rough sketch of the idea is below.
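For anyone curious, here is a minimal sketch of the general idea in PyTorch. It is not the exact procedure from [1] (which solves a box-constrained optimization with L-BFGS), just the simpler "nudge each pixel a tiny step along the loss gradient" variant; the `model`, `image`, and `true_label` names are placeholders for your own pretrained classifier and data.

```python
import torch
import torch.nn.functional as F

def perturb(model, image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` (a batched tensor in [0, 1])
    that pushes `model` away from `true_label`. A small `epsilon` keeps the
    change invisible to humans while often flipping the prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel a tiny amount in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixel values valid
```

The interesting part for CAPTCHAs is that the perturbation is tuned against the classifier, so the image stays trivially readable to people while becoming unreliable for the model.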
I think you've made a good point, and this could be a workable stopgap. But I'm keen to see whether researchers can quickly address this weakness in NNs; someone might find an easy fix, which would make such CAPTCHAs short-lived.
[1] Szegedy et al., "Intriguing properties of neural networks": http://cs.nyu.edu/~zaremba/docs/understanding.pdf