Hacker News

There are two ways to capture an EM signal. The static-array method, where you have something like a CCD (a matrix of antennas that sample a signal in parallel), is basically only useful around the visible-light part of the EM spectrum.

What you do for the rest of the spectrum is take one antenna and move it around. As long as the signal is roughly constant over the duration of the scan, this has the same effect (and is much cheaper to implement).
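The equivalence between a parallel array and a single scanned antenna can be sketched numerically. This is a hypothetical illustration (the field function and positions are invented for the example): because the field does not change in time, sampling every position at once gives the same data as visiting each position in turn.

```python
import numpy as np

# Illustrative sketch: a time-static 1-D field E(x) sampled two ways.
# The field definition and antenna positions are assumptions for
# demonstration, not any particular instrument's geometry.

positions = np.linspace(0.0, 1.0, 16)  # antenna positions (arbitrary units)

def field(x):
    """A static spatial field: sum of a few sinusoidal components."""
    return np.sin(2 * np.pi * 3 * x) + 0.5 * np.cos(2 * np.pi * 7 * x)

# Static-array method: all elements sample simultaneously.
parallel_samples = field(positions)

# Scanning method: one antenna visits each position at a different time.
# Since the field is static, when each sample is taken doesn't matter.
scanned_samples = np.array([field(x) for x in positions])

assert np.allclose(parallel_samples, scanned_samples)
```

If the field varied during the scan, the two sample sets would diverge, which is why this trick requires the signal to be (relatively) static in time.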




> The static-array method, where you have something like a CCD...is basically only useful around the visible-light part of the EM spectrum

Why is this the case? Is it just that technology is further along for visible light because there's more economic incentive for a digital camera that replicates the human eye?

Is it a materials problem, where we haven't discovered arrangements of matter with the right properties (e.g., CCDs respond to visible wavelengths and are adaptable to semiconductor manufacturing techniques)?

Or is it something to do with fundamental physics, like the wavelengths being so much longer that the detectors would have to be impractically large? Or is diffraction the problem?



