> With PRIMO, computers analyzed over 30,000 high-fidelity simulated images of black holes accreting gas.
That means that the new image better conforms to the simulated data. I understand why it makes sense to do this (simulated data generated following the laws of physics, etc.) but it seems to me that the original point of the EHT was to experimentally verify that black holes actually look as we've modeled them for decades, so sharpening images using modeled data seems somewhat backwards.
I can only assume so, given the nature of the experiment. But it seems counterintuitive to add yet more of it. Naively speaking, it feels a bit like the actual data is being "guided" towards the simulation.
However, I have zero subject matter expertise with physics, so my intuition is probably not worth much.
Since the EHT is an interferometer, the observations contain less information than would be available from a single telescope of equivalent size. Therefore, to reconstruct the 'full image' you need to find the model which best fits your interferometric observables. As far as I can tell, what they've done with PRIMO is basically a fancy version of this kind of modelling. The data isn't necessarily being guided towards the simulations; it's more that we have a better computational technique to precisely fit the data.
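To make that concrete, here's a toy sketch (Python; not the EHT or PRIMO pipeline) of fitting a parametric model to interferometric observables: assume the source is a thin ring, whose visibility amplitude has a known closed form, and recover the ring diameter from noisy visibility amplitudes by least squares. The 42 microarcsecond "true" diameter, the noise level, and the baseline range are purely illustrative.

```python
# Toy example: recover a ring diameter from noisy visibility amplitudes.
# For an infinitesimally thin ring of angular diameter d, the visibility
# amplitude vs. baseline length rho (in wavelengths) is |J0(pi * d * rho)|.
import numpy as np
from scipy.special import j0
from scipy.optimize import curve_fit

RAD_PER_UAS = np.pi / 180 / 3600 / 1e6          # microarcseconds -> radians

def ring_visibility(rho, diameter_uas):
    return np.abs(j0(np.pi * diameter_uas * RAD_PER_UAS * rho))

rng = np.random.default_rng(0)
true_diameter = 42.0                            # illustrative value, microarcseconds
rho = rng.uniform(0.5e9, 8e9, size=200)         # sparse baseline coverage, in wavelengths
data = ring_visibility(rho, true_diameter) + rng.normal(0, 0.02, rho.size)

# The objective is oscillatory, so a reasonable initial guess is needed.
(fit_diameter,), _ = curve_fit(ring_visibility, rho, data, p0=[40.0])
print(f"recovered ring diameter ~ {fit_diameter:.1f} microarcseconds")
```

The real analysis fits far richer models to far messier data, but the principle is the same: you don't get an image directly, you find the model parameters most consistent with the measurements.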
As I had to explain to the execs at a lab-on-a-chip startup I was helping to put together, you're generating a proposition of what the data might be, which still needs to be qualified with physical testing; it is not actual data.
This concept seems to be worryingly lost on some in the flurry of excitement at using ML/AI in academic research.
The data clearly supported the hole in the middle: only reconstructions with a hole fit the data. The data support the hole, and that one side is brighter, but not much more than that. Thus a blurred image does not make one believe that we know more than we do.
Yes, but those choices had a good scientific basis and were the best we could do at the time.
They are just starting data collection at a higher frequency band that will allow higher resolution. Unfortunately our baseline is currently limited by the diameter of the Earth, so the only way to get sharper data is to use shorter wavelengths.
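Rough numbers for the diffraction limit θ ≈ λ/D with an Earth-diameter baseline (230 and 345 GHz are the nominal observing bands; exact values depend on which stations participate):

```python
# Diffraction limit theta ~ lambda / D for an Earth-diameter baseline,
# showing what a higher observing band buys when the baseline can't grow.
import numpy as np

EARTH_DIAMETER_M = 12.742e6
C = 2.998e8                                    # speed of light, m/s

for freq_ghz in (230, 345):                    # nominal observing bands
    wavelength = C / (freq_ghz * 1e9)
    theta_uas = np.degrees(wavelength / EARTH_DIAMETER_M) * 3600 * 1e6
    print(f"{freq_ghz} GHz: lambda = {wavelength * 1e3:.2f} mm, "
          f"resolution ~ {theta_uas:.0f} microarcseconds")
```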
Interferometry relies on measuring the interference pattern between two points that are simultaneously measuring incoming radio waves. Each element of a baseline must be measured at the same time.
If we were to let the Earth travel around the Sun and measure components of the same baseline at different times, we would violate that requirement.
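A toy way to see the simultaneity requirement (just an illustration, not how a real VLBI correlator works): the correlator looks for a signal common to both recordings, and two recordings of the same noise-like sky made at different times share no common realisation to correlate.

```python
# Toy correlator: the fringe (correlation peak) only appears when both
# stations recorded the same wavefront at the same time.
import numpy as np

rng = np.random.default_rng(2)
n, delay = 4096, 17                             # samples; arbitrary geometric delay

sky = rng.normal(size=n + 64)                   # incoming, noise-like sky signal
station_a = sky[:n] + 0.5 * rng.normal(size=n)          # receiver noise added
station_b = sky[delay:delay + n] + 0.5 * rng.normal(size=n)
station_b_later = rng.normal(size=n) + 0.5 * rng.normal(size=n)   # different epoch

def fringe_peak(a, b):
    corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return corr.max() / (np.std(a) * np.std(b) * len(a))

print("simultaneous recordings:  ", round(fringe_peak(station_a, station_b), 2))
print("recordings at other times:", round(fringe_peak(station_a, station_b_later), 2))
```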
I don't know exactly what the tradeoffs are, but I suspect this approach has lower sensitivity due to the size of the dishes, that it's more difficult to get enough telescopes to form a good image, and that transmitting the data back is likely to be a challenge (the black hole observations were shipped on hard drives instead of being transmitted over the internet; even achieving a broadband-speed transmission rate with a deep-space object is difficult).
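Some rough numbers on the downlink point (all of these are assumptions for illustration, not mission specs): even a short observing run at a typical wideband VLBI recording rate would take years to send back over a deep-space link of a few megabits per second.

```python
# Illustrative only: why you ship disks instead of downlinking VLBI data.
record_rate_gbps = 32        # assumed per-station recording rate, Gbit/s
obs_hours = 10               # assumed length of one observing run
downlink_mbps = 5            # assumed deep-space downlink, Mbit/s

data_bits = record_rate_gbps * 1e9 * obs_hours * 3600
print(f"data volume: {data_bits / 8 / 1e12:.0f} TB")
print(f"time to downlink: {data_bits / (downlink_mbps * 1e6) / 86400 / 365:.1f} years")
```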
The original image was an amalgamation of several different reconstruction techniques with varying dependence on prior information. They also had data-driven techniques during the original analysis, but I don't think they were used to generate the final image shown to the public.
My first question here was: how do we know the results are not 'hallucinated' or infilled by the ML system?
The fact that it was fed simulated data makes this really circular. If the goal is to get hard real-world data to validate the simulations, then training the machine that generates an "improved" data set on the simulation data itself would greatly increase the likelihood that the results will match the inputs, but not because that is what is actually out in reality.
Obviously, the people doing this are vastly more expert than I am, but it'd be good to see an explanation of how they are avoiding this fundamental issue.
Does this mean that the AI/human apocalypse is also self-fulfilling, since not once have I come across a prophetic use of AI that did not end with the AI attempting to wipe out the humans?
"Our imaging system lacks sufficient resolution to confirm or disprove our theories. But we can achieve better resolution by generating images based on our theories, and combining those generated images with the data from the imaging system. We are astonished to find that the resulting high-resolution images confirm our theories."
No, but it is worth being careful about what such fitting means: what it's saying is "given this set of theories about what the black hole might look like, here's what it most likely looks like" (and hence which theory in the set is most likely). That's still usefully improving your knowledge, even though in theory there could be some other shape it looks like under theories you have not considered. And starting with the prior that it could look like literally anything is not necessarily reasonable: no one expects that it would look like a tree, for example, and that's because of other things we have observed about the universe. Science, especially astrophysics and cosmology, is a lot like a gigantic crossword: the clues and answers in one part of the puzzle constrain the other parts, and while it's theoretically possible that there is some completely different model for everything that fits all the observations done so far, generally the bits which are less clearly known still have a pretty well-defined shape, because they are constrained by other information and by the models which fit that information.
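As a toy analogy (not the actual PRIMO method): given a handful of candidate templates (the "set of theories") and noisy data, Bayes' rule distributes belief only over the candidates you considered, and the data pick out the one they favour.

```python
# Toy model comparison: posterior over a fixed set of candidate "theories".
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.2                                     # assumed noise level

candidates = {
    "narrow peak": np.exp(-((x - 0.5) ** 2) / 0.005),
    "broad peak":  np.exp(-((x - 0.5) ** 2) / 0.05),
    "flat":        np.full_like(x, 0.5),
}
data = candidates["narrow peak"] + rng.normal(0, sigma, x.size)   # truth is in the set

# Equal priors over the candidate set; Gaussian noise model for the likelihood.
log_like = {name: -0.5 * np.sum((data - t) ** 2) / sigma**2 for name, t in candidates.items()}
top = max(log_like.values())
weights = {name: np.exp(ll - top) for name, ll in log_like.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"P({name} | data) = {w / total:.3f}")
```

If the true shape isn't in the candidate set, the posterior still sums to one over the candidates you did include, which is exactly the caveat above.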