Maybe they should find logic gates in the decompression algorithm, use those to emulate a threaded CPU, and launch a multithreaded decoding process on that CPU which finds the original decompression process in memory and executes on the same image being decompressed.
The answer was available to Apple all along, if only the technique had been patented to promote the progress of science and useful arts, so that Apple engineers could be inspired on how to use their own operating system.
NSO Group's iOS exploit was sold to nation states, who used it to spy on journalists and dissidents. A detailed analysis of it was published a few days ago, and it's a mind-bending process that essentially did what I described, using an old image decompression algorithm. It's so absurd and also perceptive that making fun of it is the best way to process it.
After a series of steps, the exploit eventually arrives at the need to do general computation. They achieve this by utilizing part of the JBIG2 decompression algorithm, effectively using the phases of decompression to emulate logic gates, eventually building NAND gates, and from those a simple instruction set for a virtual computer. They then execute arbitrary code on this virtual machine.
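Stripped of the JBIG2 specifics, the underlying principle is just that NAND is a universal gate: once the decompressor's canvas operations give you that one gate, every other gate, and eventually a CPU, composes from it. A minimal sketch in C, with plain ints standing in for the exploit's canvas bits (the gate constructions are textbook, not taken from the exploit itself):

```c
#include <stdio.h>

/* One universal gate... */
static int nand(int a, int b) { return !(a && b); }

/* ...and everything else built only from it. */
static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
static int xor_(int a, int b) { return and_(or_(a, b), nand(a, b)); }

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  NOT(a)=%d AND=%d OR=%d XOR=%d\n",
                   a, b, not_(a), and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}
```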
> use those to emulate a threaded cpu, and launch a multithreaded decoding process on that cpu
This is a tongue-in-cheek joke I suppose. Even if you built multi-threading support on the virtual machine, it is still an emulated environment running within a single-threaded decompression algorithm. So it is effectively single-threaded on the real machine.
> which finds the original decompression process in memory and executes on the same image being decompressed
The original exploit is quite meta, so OP is just making a meta joke here.
When I read the comment I suspected immediately that it was a joke in reference to the above exploit, but it was cryptic enough that I wasn't quite sure.
There was a recent 0-day in iMessage relating to image parsing, in an open library that Apple used. I'm not sure why the poster is being so vague, but the blog post is an interesting read.
The second comment just seems like a dig at Apple in general.
It’s a reference to a post from a few days ago about an iMessage exploit using some legacy PDF decompression algorithm. This comment explains it: https://news.ycombinator.com/item?id=29569155
> I was involved in adding this to PNGs in ~2011 or so. I don't remember the details but you've got this pretty much right.
> The reason was indeed performance: on the first retina iPads, decoding PNGs was a huge portion of the total launch times for some apps, while one of the two cores sat completely idle. My guess is that it's still at least somewhat useful today, even with much much faster single-thread performance, because screen sizes have also grown quite a bit.
Interesting that PNG parsing was a severe bottleneck a decade ago...
> Interesting that PNG parsing was a severe bottleneck a decade ago...
Bottlenecks are relative. If your goal is to shave a couple hundred milliseconds off of app launch times, eventually you have to start digging deep. PNG decoding may not have been prohibitively slow, but for image-heavy apps it could have been the low-hanging fruit that moved them closer to the goal.
Note also that, at least in 2011, all iOS apps would show a PNG poster image prior to loading the actual interface. So this may have been more about the "perceived" app launch time than the real one.
> Note also that, at least in 2011, all iOS apps would show a PNG poster image prior to loading the actual interface. So this may have been more about the "perceived" app launch time than the real one.
The LaunchImage was definitely about the perceived launch time of an app. The idea was to show an image similar to your app's main screen, then, once the UI had initialized, magically swap the static LaunchImage out for the real interface. So you'd want a really fast PNG decoder to get it on screen quickly and let the CPU get on with initializing the application.
I'm surprised too; in the Windows world, two decades ago, "skins" were popular and those essentially turned every single UI element into a bitmap. On the hardware of that time, their impact on performance was pretty minimal.
Of course, "every UI element is a bitmap" is not a good idea anyway, for other reasons, but it seems to have somehow become the norm for web and mobile.
The alternative isn't a different raster image format — it's the native drawing commands of the underlying graphics system, like QuickDraw on classic Macs or GDI on classic Windows. They work like a plotter: moving a cursor to a position on the pixel grid and giving it drawing commands. You can see the residue of those old graphics systems in the various "metafile" image formats like PICT/WMF/EMF.
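To make the plotter model concrete, here's a toy sketch in C. The command set is made up for illustration, but it's the general shape of the records a PICT/WMF/EMF file contains: a handful of bytes of commands instead of kilobytes of pixels.

```c
#include <stdio.h>

/* Hypothetical drawing opcodes, standing in for real metafile records. */
typedef enum { MOVE_TO, LINE_TO, FILL_RECT } op;

typedef struct { op kind; int x, y, w, h; } cmd;

int main(void) {
    /* A filled button with a border: six small records,
       versus kilobytes for the equivalent bitmap. */
    cmd button[] = {
        { FILL_RECT, 10, 10, 80, 24 },
        { MOVE_TO,   10, 10,  0,  0 },
        { LINE_TO,   90, 10,  0,  0 },
        { LINE_TO,   90, 34,  0,  0 },
        { LINE_TO,   10, 34,  0,  0 },
        { LINE_TO,   10, 10,  0,  0 },
    };
    for (unsigned i = 0; i < sizeof button / sizeof button[0]; i++)
        printf("op %d at (%d,%d)\n", button[i].kind, button[i].x, button[i].y);
    return 0;
}
```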
> The alternative isn't a different raster image format
Except that we're in a conversation about the relative resource consumption of bitmaps and PNGs, not a conversation about what the best way to draw UI elements is.
> Bitmaps take memory, but are quite efficient from a runtime perspective because they aren’t encoded.
Memory and bandwidth. To decode/display an image it's got to be loaded from storage. In the case of a Retina display iPad that's 2048x1536px. With a 24-bit color depth that's a 9MB bitmap to load from disk (SSD or whatever).
I just did a test with ImageMagick. The same 2048x1536 image encoded to an uncompressed Windows bitmap is 9MB while the same image as a PNG (PNG24, zlib level 9, with adaptive scanline filtering) is only 4MB. That's less than half the data to load off disk.
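For anyone checking the arithmetic, the raw figure is just width × height × bytes per pixel (the 4MB PNG number is measured, not computed):

```c
#include <stdio.h>

/* Sanity check of the raw-bitmap figure above: no compression involved. */
int main(void) {
    long w = 2048, h = 1536, bpp = 3;   /* 24-bit RGB */
    long bytes = w * h * bpp;
    printf("%ld bytes = %.1f MiB\n", bytes, bytes / (1024.0 * 1024.0));
    /* prints: 9437184 bytes = 9.0 MiB */
    return 0;
}
```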
Unless CPU power is at an absolute premium you're probably better off with an image with lossless compression rather than completely uncompressed.
> Unless CPU power is at an absolute premium you're probably better off with an image with lossless compression rather than completely uncompressed.
Really depends on the exact situation. If your image decompression algorithm runs at 100 MB/s, but your disk bandwidth is 250 MB/s and you have plenty of space...
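Plugging those hypothetical throughputs together with the 9MB raw / 4MB PNG sizes from upthread makes the point concrete. This assumes, simplistically, that decode cost scales with output size and that reading and decoding don't overlap:

```c
#include <stdio.h>

int main(void) {
    double disk_mbps = 250.0, decode_mbps = 100.0;  /* parent's numbers */
    double raw_mb = 9.0, png_mb = 4.0;              /* sizes from upthread */

    /* Uncompressed: just read. Compressed: read less, then inflate. */
    double t_raw = raw_mb / disk_mbps * 1000.0;
    double t_png = (png_mb / disk_mbps + raw_mb / decode_mbps) * 1000.0;

    printf("uncompressed: %.0f ms, PNG: %.0f ms\n", t_raw, t_png);
    /* uncompressed: 36 ms, PNG: 106 ms -- the raw bitmap wins here,
       though pipelined reads or a faster inflate shift the balance. */
    return 0;
}
```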
Assuming the on-disk version of a graphic is compatible with your particular GPU. Even within the same manufacturer two generations of GPUs might not use the same image format.
A generalized on-disk representation is safer and likely more forwards compatible.
That seems like an interesting use case for resource forks. Decompress once and store in a resource fork of the original PNG. If storage space becomes an issue, those resource forks can be purged along with other caches.
If I recall correctly, the iPad/iPhone-compiled PNG files had the color channels optimized for the graphics hardware of the devices as well - they wouldn't decode properly anywhere else.
Yes, but "optimized" is overselling it. It stored RGBA in a premultiplied form.
IMHO it was an unnecessary spec breakage, and a misplaced focus. The most expensive part of decoding PNG is zlib inflate, so any reduction in input data is more important than post-processing savings.
If Apple stored app launch images (the feature it's been invented for) as opaque RGB, then colorspace conversion would also be free, and they'd also be saving a quarter of the most expensive stage of decoding. Instead, they've kept the big expensive part big and expensive, and shaved off 3 instructions from the almost free part.
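For concreteness, premultiplication itself is just scaling each color channel by alpha at encode time so the compositor can skip that per-pixel multiply. A generic sketch of the transform, not Apple's actual code:

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t r, g, b, a; } rgba8;

/* Scale each color channel by alpha/255, with rounding. */
static rgba8 premultiply(rgba8 p) {
    rgba8 q = { (uint8_t)((p.r * p.a + 127) / 255),
                (uint8_t)((p.g * p.a + 127) / 255),
                (uint8_t)((p.b * p.a + 127) / 255),
                p.a };
    return q;
}

int main(void) {
    rgba8 px = { 200, 100, 50, 128 };   /* half-transparent pixel */
    rgba8 pm = premultiply(px);
    printf("(%d,%d,%d,%d) -> (%d,%d,%d,%d)\n",
           px.r, px.g, px.b, px.a, pm.r, pm.g, pm.b, pm.a);
    return 0;
}
```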