
I would say color discrimination is easier because it's basically 1-dimensional. In the best case you could do it at the spatial resolution of a single photoreceptor, i.e. from a single point sample. Even if you account for ambient lighting, shadows, etc., you still just have a 2-dimensional task.

Shape discrimination, by contrast, is a 3-dimensional task: in the worst case you need to change your point of view or the orientation of the object. A toy sketch of the contrast follows below.
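
A minimal sketch of that contrast, under toy assumptions (the function names, thresholds, and the numpy dependency are mine, not anything from the thread): a hue threshold can label a color from one sample, while even a crude shape test needs the whole 2-D arrangement of pixels and already presumes a favourable viewpoint.

    # Illustrative only: color vs. shape discrimination on toy inputs.
    import colorsys
    import numpy as np

    def classify_color(rgb):
        """Label 'red-ish' vs 'green-ish' from ONE sample: a 1-D decision on hue."""
        r, g, b = (c / 255.0 for c in rgb)
        hue, _, _ = colorsys.rgb_to_hsv(r, g, b)   # hue is a single scalar in [0, 1)
        return "red-ish" if (hue < 1/12 or hue > 11/12) else "green-ish"

    def classify_shape(mask):
        """Label 'disc' vs 'bar' from a binary mask: needs the whole 2-D layout.

        Uses bounding-box elongation; a disc's box is roughly square, a bar's
        is not. Crude, and it assumes a favourable viewpoint -- a bar seen
        end-on looks compact and would be misclassified, which is the point
        about needing to move the camera or the object.
        """
        ys, xs = np.nonzero(mask)
        h = ys.ptp() + 1
        w = xs.ptp() + 1
        elongation = max(h, w) / min(h, w)
        return "bar" if elongation > 2.0 else "disc"

    # Color: one pixel is enough.
    print(classify_color((200, 30, 40)))               # red-ish

    # Shape: needs 64x64 masks of a disc and a bar.
    yy, xx = np.mgrid[0:64, 0:64]
    disc = ((yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2
    bar = (np.abs(yy - 32) < 4) & (np.abs(xx - 32) < 28)
    print(classify_shape(disc), classify_shape(bar))   # disc bar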




