In some ways, it’s been like that for a few years now. Modern mobile phone camera systems take multiple individual pictures, align them to compensate for camera or subject movement, average out temporal noise where the scene was static, build an HDR image from frames captured at different exposure settings, detect distinct sub-scenes (for example, an indoor room and an outdoor scene viewed through a window) and apply separate exposure corrections to each, identify and isolate faces and apply flattering lighting corrections, and much more.
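To make one of those stages concrete: averaging aligned frames of a static scene reduces temporal noise roughly by the square root of the frame count. This is a hypothetical minimal sketch (synthetic data, not any phone's actual pipeline), assuming perfectly aligned frames with Gaussian noise:

```python
import numpy as np

# Sketch of one burst-pipeline stage: averaging N aligned frames of a
# static scene reduces temporal noise by roughly sqrt(N).
rng = np.random.default_rng(0)

true_scene = np.full((64, 64), 100.0)  # idealized static scene luminance
frames = [true_scene + rng.normal(0.0, 10.0, true_scene.shape)
          for _ in range(16)]          # 16-frame burst, noise sigma = 10

single_frame_noise = np.std(frames[0] - true_scene)
averaged = np.mean(frames, axis=0)     # temporal average of the burst
averaged_noise = np.std(averaged - true_scene)

print(single_frame_noise, averaged_noise)  # averaged noise is ~1/4 of single
```

With 16 frames the residual noise drops to about a quarter of a single exposure's, which is why burst capture works so well in low light, provided the alignment step holds up.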
A big portion of the silicon area of the iPhone SoC is dedicated purely to this image processing pipeline.
Exactly. Image compensation and the concoction of an HDR image are the visual equivalent of a loudness-enhanced audio file.
Loudness enhancement reduces the dynamic range of the source, and HDR enhancement likewise discards information so that other details can be more easily perceived by a human viewer.