> I don't think it has anything to do with display technologies though.
I think your two examples nicely illustrate that it's all about the display technology.
> The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?
That entirely depends on how the resizing is done. Usually people choose nearest neighbor in scenarios like that to be faithful to the original 100x100 display, and to keep the images sharp. This treats the pixels as squares, which means the programmer should do so as well.
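To make that concrete, here's a minimal sketch of such a nearest-neighbor resize, assuming Pillow (recent enough to have the `Resampling` enum); the file names are made up for illustration:

```python
from PIL import Image

# Hypothetical input; the point is the choice of resampling filter.
img = Image.open("input_100x100.png")

# For an exact 2x upscale, NEAREST copies each source pixel into a
# 2x2 block of the output, i.e. it treats every pixel as a little
# square and keeps the edges sharp.
doubled = img.resize((200, 200), resample=Image.Resampling.NEAREST)
doubled.save("output_200x200.png")
```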
> Alvy Ray Smith, the author of this paper, was coming from the background of developing Renderman for Pixar.
That's meaningful context. I'm sure that in 1995, Pixar movies were exposed onto analog film before being shown in theatres. I'm almost certain this process didn't preserve sharp pixels, so "pixels aren't squares" was perhaps literally true for this technology.
> Usually people choose nearest neighbor in scenarios like that to be faithful to the original
Perhaps I should have chosen a higher resolution. AIUI, in many modern systems, such as your OS, it’s usually bilinear or Lanczos resampling.
You say that the resize should be faithful to the “100x100 display”, but we don’t know whether the image came from such a display, or from a camera, or was generated by software.
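For comparison, the same resize with a reconstruction filter, again a sketch assuming Pillow and made-up file names:

```python
from PIL import Image

img = Image.open("input_100x100.png")

# LANCZOS (like BILINEAR) interpolates between neighboring sample
# values using a reconstruction kernel, i.e. it treats the stored
# values as point samples of a continuous signal rather than
# replicating little squares.
smooth = img.resize((200, 200), resample=Image.Resampling.LANCZOS)
smooth.save("output_200x200_lanczos.png")
```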
> I'm almost certain this process didn't preserve sharp pixels
Sure, but modern image processing pipelines work the same way. They try to capture the original signal, hopefully as a faithful representation of the continuous signal, not just a grid of squares.
I suppose this is different for a “pixel art” situation, where resampling has to be explicitly set to nearest neighbor. Even so, images like that have problems in modern video codecs, which model samples of a continuous signal.
And yes, I am aware that the “pixel” in “pixel art” means a little square :). The terminology being overloaded is what makes these discussions so confusing.