I see a possible solution that may work in the short term:

Instead of displaying a static captcha, display a dynamic one, with letters going through elastic transformations. Humans are pretty good with video sequences. Computers are not.

As a side effect, this may help push computer vision algorithms toward working with videos rather than static images :)



Do OCR on every frame and perform majority voting on the results. You just made the spammer's task easier.
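
Something like this would do it (untested sketch; assumes OpenCV and pytesseract are installed and that plain per-frame Tesseract output is worth voting over):

    from collections import Counter
    import cv2
    import pytesseract

    def crack_video_captcha(path):
        cap = cv2.VideoCapture(path)
        guesses = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            text = pytesseract.image_to_string(gray).strip()
            if text:
                guesses.append(text)
        cap.release()
        # majority vote over the per-frame readings
        return Counter(guesses).most_common(1)[0][0] if guesses else None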


Here is an example of a problem that would be hard for a computer to solve:

http://www.youtube.com/watch?v=4G4y79ZbaBs


How so? The fixed part can be easily extracted. If it also moved (while morphing), then I guess it would be hard, but fixed dots on a moving background would take a computer just a few frames to solve.


You just diff each frame and keep the parts that don't change much. Very simple to solve.
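
A minimal sketch of that diff attack (assumes the frames are already loaded as same-sized grayscale numpy arrays; the threshold is an arbitrary guess):

    import numpy as np

    def extract_static_part(frames, threshold=10):
        stack = np.stack(frames).astype(np.float32)        # shape (T, H, W)
        variation = stack.max(axis=0) - stack.min(axis=0)  # per-pixel range over time
        mask = variation < threshold                       # low-variation pixels ~ the fixed letters
        # keep the stable pixels, blank out everything that moved
        return np.where(mask, stack.mean(axis=0), 255).astype(np.uint8)

Feed the result to any off-the-shelf OCR.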


Just move the letters slightly and make them morph a bit. It would still be obvious to a human, but a computer trying to average anything would fail miserably.

This is an unsolved computer vision problem.


Which pixels stay black in all frames?


Captchas are hard because there's only so much an algorithm can extract from spatial information. Computers are excellent with temporal data, given essentially unlimited memory for past video frames. Computers benefit more from increased information than humans do.


This is computer vision, and I'm somewhat of an expert in the area. I can tell you that video sequence recognition is a _much_ harder problem than image recognition.

For example, if you showed a letter made out of random noise moving through random noise, current computer vision algorithms would not be able to recognize anything, yet you would pick out that letter immediately. The human visual system is really amazing in that sense.


It should be possible to do this with an animated GIF. Do you have any references/examples I could use as a starting point?


Oh, I remember reading some vision paper whose supplementary materials included a couple of videos with letters moving. I doubt I'll be able to find it that easily.

It should be relatively easy to code with any library that can draw text on a bitmap, like PIL, matplotlib, etc. Use ffmpeg to make a video out of the frames.

1. draw letter masks (just black/white); 2. fill letters with noise; 3. fill background with noise; 4. copy letters onto the background using the mask, at location X,Y; 5. add a little bit of new noise to the letters; 6. modify X,Y (move the letters SLIGHTLY); 7. go back to step 3.
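
Something along these lines (untested sketch with Pillow and numpy; the text, font, sizes and noise levels are arbitrary placeholders):

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    W, H, FRAMES = 320, 120, 60
    font = ImageFont.load_default()

    # 1. draw the black/white letter mask
    mask_img = Image.new("L", (W, H), 0)
    ImageDraw.Draw(mask_img).text((20, 40), "XK7F", fill=255, font=font)
    mask = np.array(mask_img) > 0

    # 2. fill the letters with noise; this texture travels with them
    letter_noise = np.random.randint(0, 256, (H, W), dtype=np.uint8)

    x = y = 0
    for i in range(FRAMES):
        # 3. fill the background with fresh noise every frame
        background = np.random.randint(0, 256, (H, W), dtype=np.uint8)
        # 4. copy the letters onto the background at the current X,Y offset
        shifted_mask = np.roll(np.roll(mask, y, axis=0), x, axis=1)
        shifted_letters = np.roll(np.roll(letter_noise, y, axis=0), x, axis=1)
        frame = np.where(shifted_mask, shifted_letters, background)
        Image.fromarray(frame).save("frame_%03d.png" % i)
        # 5. add a little new noise to the letter texture
        jitter = np.random.randint(-8, 9, (H, W))
        letter_noise = np.clip(letter_noise.astype(int) + jitter, 0, 255).astype(np.uint8)
        # 6. move the letters SLIGHTLY
        x += np.random.randint(-2, 3)
        y += np.random.randint(-2, 3)

    # 7. stitch the frames into a video, e.g.
    #    ffmpeg -framerate 15 -i frame_%03d.png captcha.mp4

The default PIL font is tiny; for anything real you would load a proper TTF with ImageFont.truetype.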



This is simple and brilliant. The best solution I have ever seen.

Do you have a patent already? :)



