Although you still have four frames of latency if you do it that way. All the audio has to be delayed to match. Parallelized decode of JPEG 2000 frames is quite possible, though.
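If anyone's curious what that looks like in practice: every frame in a DCP is its own self-contained codestream, so you can decode N frames at once with one codec instance per frame. A rough sketch with OpenJPEG and OpenMP, assuming the frames have already been unwrapped from the MXF into individual .j2c files (the frame_%06d.j2c names and the 8-frame window are made up for illustration):

    /*
     * Sketch only: decode a window of DCP frames in parallel with
     * OpenJPEG + OpenMP. Assumes the frames have already been pulled
     * out of the MXF into standalone codestreams named
     * frame_000000.j2c, frame_000001.j2c, ... (hypothetical names).
     * Build with something like: cc -fopenmp par_decode.c -lopenjp2
     */
    #include <openjpeg.h>
    #include <stdio.h>

    /* Decode a single codestream file; returns NULL on failure. */
    static opj_image_t *decode_frame(const char *path)
    {
        opj_image_t *image = NULL;
        opj_dparameters_t params;
        opj_codec_t *codec = opj_create_decompress(OPJ_CODEC_J2K);
        opj_stream_t *stream =
            opj_stream_create_default_file_stream(path, OPJ_TRUE);

        if (codec && stream) {
            opj_set_default_decoder_parameters(&params);
            if (!opj_setup_decoder(codec, &params) ||
                !opj_read_header(stream, codec, &image) ||
                !opj_decode(codec, stream, image) ||
                !opj_end_decompress(codec, stream)) {
                opj_image_destroy(image);
                image = NULL;
            }
        }
        if (stream) opj_stream_destroy(stream);
        if (codec)  opj_destroy_codec(codec);
        return image;
    }

    int main(void)
    {
        enum { NFRAMES = 8 };               /* small look-ahead window */
        opj_image_t *frames[NFRAMES] = { 0 };

        /* Every frame is an independent codestream, and every
         * iteration builds its own codec and stream, so there is no
         * shared state and the decodes spread cleanly across cores. */
        #pragma omp parallel for
        for (int i = 0; i < NFRAMES; i++) {
            char path[64];
            snprintf(path, sizeof path, "frame_%06d.j2c", i);
            frames[i] = decode_frame(path);
        }

        /* Frames come back in order; hand them to the projector here. */
        for (int i = 0; i < NFRAMES; i++) {
            if (frames[i])
                opj_image_destroy(frames[i]);
        }
        return 0;
    }

And that window is exactly where the latency above comes from: you have to buffer several frames (and delay the audio to match) while it fills.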
The choice of JPEG 2000 is unexpected. Most of the neat features of JPEG 2000 are useless for cinema. Being able to construct a low-rez version from a truncated file isn't useful. Nor is the ability to divide the image into tiles and decompress different tiles at different resolutions. (That's used in JPEG 2000 medical and military imagery, where you want to zoom in on the interesting part and see it in lossless mode.) You can have more than RGB or RGBA layers, which the multispectral imagery and prepress people like. Maybe the advantage is that you can have more than 8 bits of color depth.
> Maybe the advantage is that you can have more than 8 bits of color depth.
Yes, that and the (at the time) near-state-of-the-art compression efficiency. I remember reading a technical document in which the engineers designing the standard argued for 12 bits per component based on experiments and studies they had conducted.
The ability to read a lower-rez version of the images is a feature that is actively used. That way you don't need both a 2K and a 4K DCP for movies that have a 4K version: 2K projectors can simply decode the 4K DCP at 2K resolution.
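For the curious, OpenJPEG exposes exactly this through the cp_reduce decoder parameter (there's also opj_set_decoded_resolution_factor), which tells the decoder how many of the highest wavelet resolution levels to throw away before it does any work. A rough sketch, assuming a 4K frame sitting in a standalone .j2c file (hypothetical filename):

    /*
     * Sketch only: decode a 4K frame at 2K by discarding the highest
     * wavelet resolution level. The filename is hypothetical; a real
     * player would be pulling codestreams out of the MXF track file.
     */
    #include <openjpeg.h>
    #include <stdio.h>

    int main(void)
    {
        const char *path = "frame_4k.j2c";
        opj_dparameters_t params;
        opj_image_t *image = NULL;
        opj_codec_t *codec = opj_create_decompress(OPJ_CODEC_J2K);
        opj_stream_t *stream =
            opj_stream_create_default_file_stream(path, OPJ_TRUE);
        if (!codec || !stream) return 1;

        opj_set_default_decoder_parameters(&params);
        params.cp_reduce = 1;   /* skip the top resolution level: a
                                   4096x2160 frame decodes as
                                   2048x1080 */

        if (!opj_setup_decoder(codec, &params) ||
            !opj_read_header(stream, codec, &image) ||
            !opj_decode(codec, stream, image) ||
            !opj_end_decompress(codec, stream)) {
            fprintf(stderr, "decode failed\n");
            return 1;
        }

        /* comps[].w and .h report the reduced size actually decoded. */
        printf("decoded %ux%u, %u components\n",
               image->comps[0].w, image->comps[0].h, image->numcomps);

        opj_image_destroy(image);
        opj_stream_destroy(stream);
        opj_destroy_codec(codec);
        return 0;
    }

The nice part is that this isn't post-decode downscaling: the skipped subbands are never entropy-decoded at all, so the 2K decode of a 4K frame is also substantially cheaper.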