I've never seen this mentioned, but when I tried these sorts of methods, they mostly only worked on images that started out as high-quality images and were deliberately down-scaled.
If the original image was low quality, or had artifacts, then the results were useless.
This reminded me of the recent "VFX artists react" video by Corridor Crew[1], in which they talk about how the artist(s) added a ton of detail to a scene, just for almost all of it to be blurred out and hidden by subsequent passes.
The point made was that even though the end result is blurred (out of focus, etc.), the brain can tell whether it started out looking appropriate or not. So to make a good shot, the detail is required.
In that case it's our brain doing the up-scaling, of sorts, and the issue seems similar. Down-scaling (blurring) a high-quality image and a low-quality one will result in two images that have critical differences. When those are up-scaled again, they will lead to very different outcomes.
Similarly, if noise is introduced in the down-scaled (blurred) image, that is in essence the same as changing the image it was down-scaled from, so up-scaling would again lead to something different.
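
For what it's worth, here is a small toy sketch of that point (my own example, nothing from the linked repo; it assumes NumPy, uses a synthetic checkerboard as the "detail", and plain 8x8 box averaging as the "down-scale"): two sources that differ only in fine detail and noise end up nearly indistinguishable after down-scaling, so any up-scaler has to invent the difference.

    # Toy illustration: fine detail and noise are averaged away by down-scaling,
    # so very different sources can map to nearly the same low-resolution image.
    import numpy as np

    rng = np.random.default_rng(0)

    # "High-quality" source: a fine 1-pixel checkerboard pattern.
    hq = np.zeros((256, 256))
    hq[::2, ::2] = 255.0
    hq[1::2, 1::2] = 255.0

    # "Low-quality" source: the same pattern with artifacts (additive noise).
    lq = np.clip(hq + rng.uniform(-40, 40, hq.shape), 0, 255)

    def box_downscale(arr, factor=8):
        """Down-scale by averaging non-overlapping factor x factor blocks."""
        h, w = arr.shape
        return arr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    d_hq = box_downscale(hq)
    d_lq = box_downscale(lq)

    # The down-scaled versions are far closer to each other than the sources were,
    # so the information that distinguished them is simply gone.
    print("mean |difference| between sources:    ", np.abs(hq - lq).mean())
    print("mean |difference| after down-scaling: ", np.abs(d_hq - d_lq).mean())
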
I am afraid these methods ("learn the typical detail and 're'-apply it"), in the stated context ("obtain telling detail where we miss it"), would be "dreaming" the detail.
These methods are for "beauty", not for "inducing real facts": what is "probable" is not what is "real".
And notice the black spot under the iris of the right eye, on the left: does the actual person really have it? It is not in the blurred source image, https://github.com/Janspiry/Image-Super-Resolution-via-Itera... . Now think of an "empiricist" perceptive instrument returning that kind of "noise"...
--
I understand you work on the topic (ANN for assessment), so you surely meant something different from what contextually appears.