I agree with you. There's no reason to run every pixel shader twice in full.
It seems logical that each surface/polygon could be rendered once, for the eye that sees the most of it (a left-facing surface for the left eye, a right-facing surface for the right eye), then warped to fit the other eye's view, with the remaining blanks filled in afterward. Of course, the real algorithm would be more complicated than this, but it seems like at least some rendering could be saved this way.
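For what it's worth, here's a minimal image-space sketch of that reprojection idea in Python/NumPy, assuming you already have the left eye's color and depth buffers. The function name and parameters (focal_px, baseline_m) are made up for illustration, not from any real engine. It shifts each pixel by its stereo disparity and flags the holes that still need filling:

import numpy as np

def reproject_left_to_right(color, depth, focal_px, baseline_m):
    """Warp a left-eye render into an approximate right-eye view.

    color: (h, w, 3) left-eye image; depth: (h, w) eye-space depth in meters.
    focal_px: focal length in pixels; baseline_m: eye separation in meters.
    Returns the warped image plus a mask of holes ("blanks") to fill.
    """
    h, w = depth.shape
    right = np.zeros_like(color)
    hole = np.ones((h, w), dtype=bool)
    # Stereo disparity in pixels: d = f * B / Z, so near surfaces shift more.
    disparity = np.rint(focal_px * baseline_m / np.maximum(depth, 1e-6)).astype(int)
    xs = np.arange(w)
    for y in range(h):
        order = np.argsort(-depth[y])        # paint far-to-near so near pixels win
        x_src = xs[order]
        x_dst = x_src - disparity[y, order]  # right eye sits to the right, so geometry slides left
        valid = (x_dst >= 0) & (x_dst < w)
        right[y, x_dst[valid]] = color[y, x_src[valid]]
        hole[y, x_dst[valid]] = False
    return right, hole

The hole mask is exactly the "fill in all the blanks" step: pixels the left eye couldn't see (disocclusions) that still need a real render or an inpainting pass.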
Technically the lighting won't be right (view-dependent effects like specular highlights shift between eyes), but you don't have to use the trick for every polygon, and real-time 3D rendering is already all about making it 'good enough' to trick the human visual system, not about being mathematically accurate. If technical accuracy were what we insisted on, games would be 100x100 pixels at 15 FPS, since we'd be using photon mapping.