I'm not sure your concerns are completely valid here. We've reached the point where consumer phone cameras are good enough for 90% of photography. At this point there's huge value in giving better access to the platform itself so you can start applying machine learning techniques directly on the camera.
In addition, I'd argue the FPGA you call limited actually gives access to powerful reprogrammable logic. Why hard-code image processing algorithms when you can update them as new techniques come out?
Not only is this an amazing sensor (180fps, global shutter), it's built on cutting-edge technology. I'd argue that sensor technology may be reaching some of its limits, and that an industrial sensor like this can match the quality of DSLR sensors. It may only be 12MP, but Google and Apple have both shown that intelligent algorithms can produce amazing results from sensors that size.
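To make that last point concrete, here's a toy sketch (my own illustration in Python/numpy, not anything this camera actually ships): a 180fps global-shutter readout lets you stack a short burst and average it, and uncorrelated noise drops by roughly sqrt(N). That's the basic idea behind the burst pipelines Google and Apple use.

    import numpy as np

    def merge_burst(frames):
        # frames: list of aligned HxW raw captures of the same scene.
        # Averaging N frames cuts uncorrelated noise by ~sqrt(N), so a
        # 16-frame burst at 180fps (~90ms of capture) is ~4x cleaner.
        return np.stack(frames).astype(np.float64).mean(axis=0)

    # toy demo on a small synthetic "scene" (stand-in for a 12MP frame)
    rng = np.random.default_rng(0)
    scene = rng.uniform(0.0, 1.0, size=(300, 400))
    burst = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(16)]
    print(np.std(burst[0] - scene))            # ~0.05
    print(np.std(merge_burst(burst) - scene))  # ~0.0125

Real pipelines obviously also align frames and handle motion, but that's the gist of getting DSLR-ish noise out of a small, fast sensor.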
>> At this point there's huge value in giving better access to the platform itself so you can start applying machine learning techniques directly on the camera.
What does machine learning have to do with photography? And if there's an answer to that, I'd add: why does it need to happen on the camera itself?
I'd agree with parent that ML is one of the killer apps of more open professional camera hardware.
Nowhere I can think of (other than Android, and there are a lot of caveats there) has an open hardware upstart challenged entrenched commercial players for legacy needs and won.
Where you do win is by targeting emerging needs, especially ones that legacy commercial players are ill-equipped to take advantage of (due to institutional inertia or ugly tech stacks).
As for why ML on edge devices: a lot of work is going into running models (or at least first passes) on edge devices with limited resources (see articles on HN). I'd assume the companies doing that work have business reasons.
But offhand, almost everything about vision-based ML in the real world gets better if you can decrease latency and remove high-bandwidth internet connectivity as a requirement.
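Sketch of what "on the camera" looks like in practice, assuming a TensorFlow Lite runtime on the device and some model you've already converted (the model file name here is made up):

    import numpy as np
    import tflite_runtime.interpreter as tflite  # tf.lite.Interpreter works the same way

    interpreter = tflite.Interpreter(model_path="scene_model.tflite")  # hypothetical model
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    def run_on_frame(frame):
        # frame: RGB array already resized to the model's input shape.
        # Everything happens locally: no upload, so latency is just
        # sensor readout plus inference time.
        x = np.expand_dims(frame, 0).astype(inp["dtype"])
        interpreter.set_tensor(inp["index"], x)
        interpreter.invoke()
        return interpreter.get_tensor(out["index"])[0]

The point isn't this particular runtime, it's that the whole loop fits on the device, which is exactly where open access to the camera platform pays off.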
It relies on the fact that nobody takes really original pictures, hence you can build a large enough data set and apply learned techniques/attributes/behavior to "new" pictures. Computer vision = small standard deviation.
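A toy way to picture that claim (entirely my own illustration, not how any production system works): embed a big set of existing photos, then for a "new" shot reuse whatever was learned for its nearest neighbours.

    import numpy as np

    def embed(img):
        # stand-in for a real learned embedding: a coarse RGB histogram
        hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(4, 4, 4),
                                 range=[(0, 256)] * 3)
        return hist.ravel() / hist.sum()

    def borrow_attribute(new_img, dataset_embeddings, dataset_attributes):
        # dataset_embeddings: N x 64 array of embed() vectors for existing photos
        # dataset_attributes: whatever was learned per photo (a label, a tone curve, ...)
        dists = np.linalg.norm(dataset_embeddings - embed(new_img), axis=1)
        return dataset_attributes[int(np.argmin(dists))]

If most new pictures land close to something already in the data set, borrowing learned behaviour like this works far more often than you'd expect.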
I think the point is that the low-level image sensor functions such as timing, readout, buffering, error correction, etc. don't change much. Therefore it's more cost-effective to do them in an ASIC rather than pay a premium for an FPGA.
The video on the properties of the sensor itself is quite impressive: http://vimeo.com/17230822