Edit: Below is incorrect. They claim the total area is enough to capture 40 full moons, not that the sensor is 40 full moons across. Assuming a perfectly round sensor you could estimate its radius from that, but that's too much on my phone.
Maybe they "left it as an exercise for the reader"? ;-)
The FOV is "40 full moons", so 40•0.5° = 20°. For a plane 15 miles away that comes to an image covering 2•15•tan(10°) miles = 5.3 miles.
So you can spot a golf ball in an image that's 5.3 miles wide.
One of their pictures shows the sensor as 7 moons wide, which is probably a good approximation (6 rows of those 7-moon strips do seem close to filling the area of the sensor in their illustration, if you then shift some moons where the rows get shorter at the top and bottom).
So using your calculation, the FOV would be approximately 7 × 0.5° = 3.5°.
Assuming the sensor is 7 full moons across as indicated in their figure, the FOV is only 3.5°. In this case the plane at 15 miles distance is only 0.9 miles across.
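If it helps, here's a quick sketch of both estimates in Python, using the simple geometry above (a flat target at distance d just fills the field of view when it is 2·d·tan(FOV/2) wide):

    import math

    def coverage_miles(fov_deg, distance_miles):
        """Width of a flat target at the given distance that just fills the field of view."""
        return 2 * distance_miles * math.tan(math.radians(fov_deg / 2))

    # 40 moons across vs. 7 moons across, at ~0.5 degrees per full moon
    for fov in (40 * 0.5, 7 * 0.5):
        print(f"FOV {fov:>4.1f} deg covers {coverage_miles(fov, 15):.1f} miles at 15 miles")
    # FOV 20.0 deg covers 5.3 miles at 15 miles
    # FOV  3.5 deg covers 0.9 miles at 15 miles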
The article has the answer to your question: There was no lens, just a pinhole. The lens and filter system gets integrated "over the next few months". They tested the sensor array, not the lens.
Ha, that's how I tested a DSLR that I had repaired. I had no matching lens at hand, so I wrapped aluminium foil over the front and poked a tiny hole into it with a needle. The pinhole setup was good enough to test what I needed to test :D
As others have stated, there’s no lens for focusing.
What is, however, kind of surprising to me is the number of hot pixels and the amount of dust/debris on the sensors. You'd think that, as expensive and fragile as they claim the sensor array is, they'd keep the environment cleaner. Example: https://www.dropbox.com/s/rhldlwyqzuvbf2o/2020-09-13%2007.36...
I also question the “five human hairs” wide gap between the sensor pods given that the gaps are clearly visible in the press photos of the sensor array.
None of the debris is on the sensors. There is a small amount of dust on a window, and it shows up twice as much due to the image processing performing bias subtraction.
The sensors are in a cryostat, and the effective size of what you might think of as "dust" in these pictures would be quite large (several mm) if it had actually been on the sensors.
If you zoom into the picture (in the upper middle parts) there are a lot of transparent circle/torus shapes. I see the exact same things (though I never paid much attention to them) if I look into a bright light source and squint (especially if my eyes are moist from tears). These circles also seem to move. What are they?
If they move, they are probably floaters (little cells or bits of debris floating around in your vitreous humor). In the image, they appear to be diffraction artifacts (Airy patterns) from the pinhole aperture.
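For a sense of scale, the first Airy minimum sits at an angular radius of about 1.22·λ/D. The article doesn't give the pinhole diameter or the pinhole-to-sensor distance, so the numbers below are purely illustrative assumptions:

    # Hypothetical numbers -- not from the article
    wavelength_m = 550e-9   # green light (assumed)
    pinhole_d_m = 1e-3      # assumed 1 mm pinhole
    distance_m = 1.0        # assumed pinhole-to-focal-plane distance

    theta = 1.22 * wavelength_m / pinhole_d_m   # angular radius of the first Airy minimum
    spot_radius_mm = theta * distance_m * 1e3   # projected radius on the focal plane
    print(f"Airy disk radius ~ {spot_radius_mm:.2f} mm")   # ~0.67 mm, i.e. many pixels across

With numbers anywhere in that ballpark the diffraction pattern spans many pixels, so ring-like artifacts would indeed be visible in the raw frames.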
A flat image was taken for bias subtraction and the debris shows up twice because the projector had moved slightly between images, so the image processing ended up subtracting where it didn’t need to and missing the original artifact.
It is an array, not really a single camera. Note that there will be dead space, black lines between sensors, that wouldn't be acceptable in any camera not meant for a telescope. IMHO, if we are going to talk about big cameras, the detector array is less important than the light-gathering system of mirrors and lenses.
A digital sensor is an array of pixels. There are lines between pixels... assuming no AA filter, in which case there are lines between filter cells. The objection is a "No true Scotsman." A camera consists of an aperture for collecting light and a medium for observing the collected light, e.g. the camera obscura and the camera lucida. If the collecting medium can record the collected light, it can produce photographs, either still or moving.
It’s worth noting that astronomical pictures often consist of multiple overlapping photographs. And I guess it’s plausible to describe a digital sensor as capturing many one pixel photographs.
Microlens arrays have been standard for years and push the fill factor to ~100%, so that argument is kinda bogus. In any case, the gaps between adjacent sensors in a "raft" are much bigger than the pixel pitch, and the gaps between rafts are larger still, so it's moot either way.
It is a single camera with multiple CCD detectors on a common focal plane behind a shutter. It is not a single CCD on a common focal plane. If there were such a 3.2 GP single sensor, the noise would be abysmal, the silicon would be huge, the pixels tiny, and it wouldn't be all that interesting for astronomy. Note that each 16 MP sensor is physically quite large as well.
> If there were intelligent life out there advertising its presence, are we likely to detect it given the sheer quantity of sources?
Humans are pretty intelligent. Intelligent enough to ask that question, anyway.
So flip it around. If humans were to advertise our own presence to possible species living tens of billions of light years away, how would we go about doing it?
Sadly, even if they noticed and wrote back, our sun will have since burned out, our time in this universe long past.
I have also wondered this, and came to the conclusion that you'd use whatever medium travels faster than light, because until we discover such a thing we're totally isolated anyway.
Doing the math with some quick Google searches...
A single red photon has an energy of about 2.8 × 10^-19 J.
This sensor is for a telescope with a primary mirror size of 8.4 meters, so area ≈ π × (8.4 m)^2 ≈ 221.7 m^2.
The distance to the closest galaxy, Andromeda, is 2.537 million light years = 2.4×10^22 meters
If we assume it's a broadcast message (a poor guess for Andromeda, but reasonable for further galaxies), with energy projected in all directions, we can use the surface area of a sphere at that distance to get an idea of how much energy would need to be broadcast for us to detect a single red photon: A = 4πr^2
Wolfram Alpha helpfully says that is "≈ (0.024 ≈ 1/42) × the energy output of the Sun in one second (1 s × L_sun)".
Which is better than I expected, honestly.
So, even for detecting life in the closest galaxy, an intelligent species would need to be working at energy levels comparable to the output of stars to have any hope of its light reaching Earth. And that's not even counting getting above the noise of the rest of the stars, and then sending something identifiable as intelligent.
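For anyone who wants to poke at it, here's the same back-of-the-envelope in Python, taking the thread's figures at face value (2.8 × 10^-19 J per red photon, the ~222 m^2 area used above, Andromeda at 2.4 × 10^22 m, and ~3.8 × 10^26 W for the Sun):

    import math

    E_photon = 2.8e-19            # J, one red photon
    area_m2 = math.pi * 8.4**2    # ~221.7 m^2, the collecting-area figure used above
    r_m = 2.4e22                  # m, distance to Andromeda
    L_sun = 3.8e26                # W, solar luminosity

    # Isotropic broadcast: our mirror intercepts area_m2 out of a sphere of area 4*pi*r^2
    E_broadcast = E_photon * 4 * math.pi * r_m**2 / area_m2
    print(f"Energy to land one photon on the mirror: {E_broadcast:.1e} J")
    print(f"Fraction of one second of solar output:  {E_broadcast / L_sun:.3f}")
    # ~9e24 J, roughly 1/40 of the Sun's output in one second

(Strictly, 8.4 m is the primary mirror's diameter, so the collecting area is closer to π × (4.2 m)^2 ≈ 55 m^2 and the answer closer to a tenth of a solar second, but it's the same order of magnitude either way.)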
Makes you wonder though... would it be possible to modulate the light from the star using a shutter-like mechanism in the direction you want to communicate?
Imagine something like a DLP chip, but super thin: a reflective foil with tiny baffles that can be opened and closed using something like a simple electrostatic system. Make these into an enormous light-sail like sheet the same radius as the star. Place it between the star and the target system, and you have something akin to a signal lamp using the star as the light source.
You don't need to generate power on the order of a star, you just need to modulate the light field of one. That's a much lower requirement.
A bit of back-of-the-envelope maths says this is about 4 × 10^12 kg, assuming a sail with the radius of our Sun, 1 nm thick, and made of aluminium.
That's a huge amount of mass, but conceptually manufacturable using automated asteroid mining or some such.
I feel like I'm making a mistake in my calculations for the energy required to toggle the shutters, but it's certainly low enough that embedded solar cells could trivially provide it, even if they're tiny and far apart.
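The 4 × 10^12 kg figure does check out under those assumptions (a disc with the Sun's radius, 1 nm of aluminium):

    import math

    r_sun = 6.96e8     # m, solar radius
    thickness = 1e-9   # m, 1 nm foil (assumed)
    rho_al = 2700      # kg/m^3, density of aluminium

    mass_kg = math.pi * r_sun**2 * thickness * rho_al
    print(f"Sail mass ~ {mass_kg:.1e} kg")   # ~4.1e12 kg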
Maybe all these light dips we detect as planets transiting their stars are an elaborate "Hello!" in the form of retired Dyson spheres repurposed as shutters.
If one in 20 billion signals is transmitting a message in an encoding we don’t understand, on a time scale potentially completely different from our own.... I’m not confident.
If an intelligent life form exists out there and is advanced enough to ponder the meaning of life and if there is other life... and is intelligent enough to know how to send messages across the universe... and to encode them...
They would therefore be intelligent enough to know how to broadcast such a message in a way that can be universally understood.
Assuming it's possible mathematically (in terms of an arbitrary encoding that we cannot explain away as a standard physical phenomenon, e.g. a quasar), and, more importantly, assuming it's actually possible from an engineering standpoint. All the radio signals we send become indistinguishable from noise quite quickly.
I don't think we can realistically expect to detect anything beyond our galactic neighborhood. What could someone in another galaxy do to catch our attention? Trigger supernovas at an interval representing the Planck constant?
We are already trying to detect it. Taking a photograph (or rather, recording a relatively narrow band of electromagnetic data) is not the best means of detection.
I expect this project is more geared towards studying the structure of the universe, e.g. star and galaxy formation and evolution.
Maybe there is life with radically different physics from ours. And maybe they built some novelty galaxy constellations and our physicists will puzzle over those.
"We've had to update our model of how galaxies form to include engineering as the most parsimonious explanation for SLAC 79813 and its satellites 79828 and 79829. Light from SLAC 79833 indicates only class O stars whereas both satellites seem to contain only class M stars. Very pretty! But no natural process except precise engineering on a cosmic scale could lead to such homogeneity.
If our models are correct, tidal forces will stretch the two satellite galaxies into orthogonal tidal loops over the next billion years. It takes a species with significantly longer life-span than ours to appreciate the fireworks. But you can watch our simulation."
Such unlikely discoveries might very well start with a photo of the sky.
That’s not the same though. The sunglasses are in front of my eyes. If you put that mesh on my retina, I’m gonna see lines (if by some miracle my eyes still work)
Uh, are they doing some kind of bizarre imaging with micro-lenses at the array and offsetting the focal plane? Because otherwise it doesn't. Gaps are missing pixels.
Presumably for astronomical applications they don't matter.
Usually this is worked around because astronomical images are stacks of many integrations, and you dither the pointing between them. If you take ten 5-minute exposures, for example, and you dither correctly, you might end up with a bunch of pixels with 9/10 or 8/10 data values; then you stack the images and weight those pixels accordingly.
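As a rough sketch of that weighting step (names and shapes here are illustrative, assuming the dithered exposures have already been registered onto a common pixel grid; this is not the project's actual pipeline):

    import numpy as np

    def stack_dithered(exposures, valid):
        """Coverage-weighted mean of registered exposures.

        exposures: (n, h, w) array of aligned frames
        valid:     (n, h, w) boolean mask, False where a pixel landed in a chip gap
        """
        weights = valid.astype(float)
        total = (exposures * weights).sum(axis=0)
        coverage = weights.sum(axis=0)   # e.g. 8, 9 or 10 out of 10 exposures
        return np.divide(total, coverage,
                         out=np.zeros_like(total), where=coverage > 0)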
Is there a constellation pattern SpaceX could use that would leave open a clear sky above Chile? (Maybe at the expense of not offering service in the country).
Any orbit has to cross the equator, so anything that avoids Chile is also going to have to avoid quite a lot of other stuff. Only a geosynchronous orbit would make any sense there and then it'd be too high to have low latency.
It's not actually that hard and can easily be done on any modern computer with a lot of RAM (albeit it takes longer than the near-instant application on a camera-phone photo).
The images also aren't that large, just high-res for a single focal-plane image. I have personally taken WAY higher-resolution photos than 3.2 gigapixels using post-process stitching of hundreds (or thousands, in one case) of photos. The highest-resolution one I've taken was from the roof of a building in San Francisco: just over 2,000 twenty-five-megapixel images shot with a 400 mm lens, producing a combined image over 10x higher resolution (around 42 billion pixels) than this camera.
Fun fact: 3.2 Gp can still be stored as a single JPEG. The JPEG file format tops out at 4 Gp (64k × 64k). PNG generally can't go that high because it tops out at a 2 GB file size (limited by bytes rather than pixels). For the previously mentioned 40 Gp image, the file format I had to use was PSB. It's been almost a decade since I took that pano, but if memory serves the file was around 150 GB.
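The JPEG ceiling is easy to sanity-check, since the format stores width and height as 16-bit values:

    max_side = 65_535                       # JPEG width/height are 16-bit fields
    print(f"{max_side**2 / 1e9:.2f} Gp")    # ~4.29 Gp, comfortably above 3.2 Gp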
Not that I have such images to try this on, so I'm just guessing:
3200 megapixels is around 200x bigger than a typical photo today, so around 200x slower. It would also still fit in, say, 10 GB of RAM.
Since applying a filter takes way less than a second on a standard photo (well, it depends on what kind of filter you mean; let's say an approximated Gaussian blur), single-threaded it'd be around a minute on a huge one, and with e.g. 8 or 16 threads, much faster.
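One way to check that scaling argument is to time a Gaussian blur on an ordinary-sized frame and extrapolate linearly, since the cost is roughly proportional to pixel count for a fixed kernel (a rough sketch, single-threaded):

    import time
    import numpy as np
    from scipy.ndimage import gaussian_filter

    small = np.random.rand(4000, 4000).astype(np.float32)   # 16 MP test frame
    t0 = time.perf_counter()
    gaussian_filter(small, sigma=3)
    dt = time.perf_counter() - t0

    scale = 3.2e9 / small.size   # ~200x more pixels in a 3.2 Gp image
    print(f"16 MP blur: {dt:.2f} s -> ~{dt * scale:.0f} s single-threaded at 3.2 Gp")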
In terms of throughput, not a problem at all. With proper GPU computing techniques you can do full color correction and so on at dozens of Gigapixels per second on a single desktop GPU (bound by memory bandwidth).
The problem is more with bandwidth and working memory. For example, if you stream the image to a GPU (at 6.4 GB per image, assuming greyscale 16-bit), you're simply bottlenecked by PCIe. GPU memory sizes aren't favorable to these sizes either; most models don't have enough memory to hold one input and one output buffer (assuming you also want 16-bit out). So, with a single GPU, the bus would limit you to around 1-2 pictures per second.
However, the quoted throughput is "30 TB per night", which is only about one GB per second. So it's plausible (though unlikely they do) to process all of the data on a single desktop PC with a GPU and a dual 10 GbE NIC.
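The arithmetic behind those two numbers, under the same assumptions (16-bit pixels, roughly 16 GB/s of usable PCIe 3.0 x16 bandwidth, a 10-hour observing night):

    pixels = 3.2e9
    image_bytes = pixels * 2            # 16-bit greyscale -> 6.4 GB per image
    pcie_bytes_per_s = 16e9             # optimistic PCIe 3.0 x16 throughput
    round_trip = 2 * image_bytes        # upload the input, download the output

    print(f"PCIe-limited rate: {pcie_bytes_per_s / round_trip:.1f} images/s")  # ~1.2/s

    night_bytes = 30e12                 # "30 TB per night"
    night_seconds = 10 * 3600
    print(f"Sustained ingest: {night_bytes / night_seconds / 1e9:.2f} GB/s")   # ~0.83 GB/s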
One image (sensor) at a time. We have multiple types of image processing we will do; some of it happens immediately and some several months later.
Please try to make HN a corner of the web where participating benefits everyone. Perhaps a breakdown or rationale as to why a camera sensor of this resolution will never be that small?
Smartphones have become ridiculously large in the past years, but I think you are a bit too pessimistic if you assume that they will fit two-foot sensors that soon.
Okay, that's just a nerd swipe. What's the field of view?! Oh, journalism...