Connecting the iDOTs (2020) (hackerfactor.com)
72 points by ErikCorry on Dec 19, 2021 | 36 comments


Maybe they should find logic gates in the decompression algorithm, use those to emulate a threaded cpu, and launch a multithreaded decoding process on that cpu which finds the original decompression process in memory and executes on the same image being decompressed


Errr, is this some joke related to ForcedEntry?

edit: the name of NSO's iMessage zero click exploit is ForcedEntry


The answer was available to Apple all along, if only the technique had been patented to promote the progress of science and useful arts, so that Apple engineers could have been inspired in the use of their own operating system


Can you explain it plainly? It seems like there's a funny joke or irony in there, but I don't have the experience to detect it properly.


NSO Group’s iOS exploit was sold to nation states that spied on journalists and dissidents with it. The details behind it were published a few days ago, and it’s a mind-bending process that essentially did what I described, using an old image decompression algorithm. It’s so absurd, and also so perceptive, that making fun of it is the best way to process it.


> Maybe they should find logic gates in the decompression algorithm

There is an iMessage zero-click exploit from NSO Group used in the wild. Project Zero did a deep dive on it:

https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...

After a series of steps, the exploit eventually arrives at the need to do general computation. They achieve this by utilizing a part of the JBIG2 decompression algorithm, effectively using the phases of decompression to emulate logic gates to eventually build NAND gates, and from that a simple instruction set for a virtual computer. They then execute arbitrary code on this virtual machine.
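To get a feel for why that's enough, here's a toy sketch of the principle in Python, with single bits standing in for the exploit's full bitmaps, and obviously not NSO's actual code: the decoder's AND/OR/XOR/XNOR canvas operators compose into NAND, and NAND is functionally complete.

    # Toy model: the JBIG2-style operators available for combining bitmaps
    # onto a shared canvas. Chaining them yields NAND, and from NAND any
    # circuit (and eventually a small virtual CPU) can be built.
    def AND(a, b):  return (a & b) & 1
    def OR(a, b):   return (a | b) & 1
    def XOR(a, b):  return (a ^ b) & 1
    def XNOR(a, b): return ((a ^ b) ^ 1) & 1

    def NOT(a):     return XNOR(a, 0)       # invert by XNOR-ing against a zero "bitmap"
    def NAND(a, b): return NOT(AND(a, b))   # functionally complete

    def half_adder(a, b):
        # a one-bit adder built purely from the canvas operators above
        return XOR(a, b), AND(a, b)

    assert [NAND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]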

> use those to emulate a threaded cpu, and launch a multithreaded decoding process on that cpu

This is a tongue-in-cheek joke I suppose. Even if you built multi-threading support on the virtual machine, it is still an emulated environment running within a single-threaded decompression algorithm. So it is effectively single-threaded on the real machine.

> which finds the original decompression process in memory and executes on the same image being decompressed

The original exploit is quite meta, so OP is just making a meta joke here.

When I read the comment I suspected immediately that it was a joke in reference to the above exploit, but it was cryptic enough that I wasn't quite sure.


There was a recent 0-day in iMessage relating to image parsing, in an open-source library that Apple used. I’m not sure why the poster is being so vague, but the blog post is an interesting read.

The second comment just seems like a dig at Apple in general.


and the software patent system with the supporting wording in the US constitution

there are levels to this absurdity


you have quite the curious style of conveying information implicitly


It’s a reference to a post from a few days ago about an iMessage exploit using some legacy PDF decompression algorithm. This comment explains it: https://news.ycombinator.com/item?id=29569155


the exploit is called ForcedEntry


Oh didn’t realize. I’ll leave the link in case anyone wants to read about it though.


A commenter to that article (nolen) wrote:

> I was involved in adding this to PNGs in ~2011 or so. I don't remember the details but you've got this pretty much right.

> The reason was indeed performance: on the first retina iPads, decoding PNGs was a huge portion of the total launch times for some apps, while one of the two cores sat completely idle. My guess is that it's still at least somewhat useful today, even with much much faster single-thread performance, because screen sizes have also grown quite a bit.

Interesting that PNG parsing was a severe bottleneck a decade ago...


> Interesting that PNG parsing was a severe bottleneck a decade ago...

Bottlenecks are relative. If your goal is to shave a couple hundred milliseconds off app launch times, eventually you have to start digging deep. It may not have been prohibitively slow, but for image-heavy apps it could have been the low-hanging fruit that moved them closer to the goal.


Note also that, at least in 2011, all iOS apps would show a PNG poster image prior to loading the actual interface. So this may have been more about the "perceived" app launch time than the real one.


> Note also that, at least in 2011, all iOS apps would show a PNG poster image prior to loading the actual interface. So this may have been more about the "perceived" app launch time than the real one.

The LaunchImage was definitely about the perceived launch time of an app. The idea was that you show a static image resembling your app's main screen, and once the UI has initialized it just magically replaces the LaunchImage. So you'd want a really fast PNG decoder to get it on screen quickly and leave the CPU free to do the work of initializing the application.


I'm surprised too; in the Windows world, two decades ago, "skins" were popular and those essentially turned every single UI element into a bitmap. On the hardware of that time, their impact on performance was pretty minimal.

Of course, "every UI element is a bitmap" is not a good idea anyway, for other reasons, but seems to have someone become the norm for web and mobile.


Bitmaps are special in that they don’t require a decompression step, so of course they don’t have the overhead of a decompression step.


The alternative isn't a different raster image format — it's the native drawing commands of the underlying graphics system, like QuickDraw on classic Macs or GDI on classic Windows. They work like a plotter: moving a cursor to a position on the pixel grid and giving it drawing commands. You can see the residue of those old graphics systems in the various "metafile" image formats like PICT/WMF/EMF:

https://en.wikipedia.org/wiki/PICT

https://en.wikipedia.org/wiki/Windows_Metafile
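In miniature, the "plotter" idea looks something like this; hypothetical command names, purely for illustration, not real QuickDraw/GDI calls:

    # A miniature "metafile": the image is a list of drawing commands rather
    # than pixels, replayed onto whatever raster target exists at render time.
    def replay(commands, width, height):
        canvas = [[0] * width for _ in range(height)]
        for cmd in commands:
            if cmd[0] == "fill_rect":
                _, x, y, w, h = cmd
                for yy in range(y, min(y + h, height)):
                    for xx in range(x, min(x + w, width)):
                        canvas[yy][xx] = 1
            elif cmd[0] == "hline":
                _, x0, x1, y = cmd
                for xx in range(x0, min(x1, width)):
                    canvas[y][xx] = 1
        return canvas

    picture = [("fill_rect", 2, 2, 4, 3), ("hline", 0, 8, 7)]
    bitmap = replay(picture, 8, 8)   # rasterized on demand, no stored pixels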


> The alternative isn't a different raster image format

Except that we're in a conversation about the relative resource consumption of bitmaps and PNGs, not about the best way to draw UI elements.


>They work like a plotter: moving a cursor to a position on the pixel grid and giving it drawing commands.

This isn't how GPUs work, which means it will be slow.


Bitmaps take memory, but are quite efficient from a runtime perspective because they aren’t encoded.


> Bitmaps take memory, but are quite efficient from a runtime perspective because they aren’t encoded.

Memory and bandwidth. To decode/display an image it's got to be loaded from storage. In the case of a Retina display iPad that's 2048x1536px. With a 24-bit color depth that's a 9MB bitmap to load from disk (SSD or whatever).
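The back-of-the-envelope arithmetic (real files add a small header on top):

    width, height = 2048, 1536        # Retina iPad panel
    bytes_per_pixel = 3               # 24-bit RGB, no alpha
    raw = width * height * bytes_per_pixel
    print(raw, raw / 2**20)           # 9437184 bytes, exactly 9.0 MiB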

I just did a test with ImageMagick. The same 2048x1536 image encoded to an uncompressed Windows bitmap is 9MB while the same image as a PNG (PNG24, zlib level 9, with adaptive scanline filtering) is only 4MB. That's less than half the data to load off disk.

Unless CPU power is at an absolute premium you're probably better off with an image with lossless compression rather than completely uncompressed.


> Unless CPU power is at an absolute premium you're probably better off with an image with lossless compression rather than completely uncompressed.

Really depends on the exact situation. If your image decompression algorithm runs at 100 MB/s, but your disk bandwidth is 250 MB/s and you have plenty of space...
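Rough numbers to illustrate, using the hypothetical throughputs above and the sizes from the ImageMagick test (and assuming, for simplicity, that the 100 MB/s is measured on decoded output and that I/O and decode don't overlap):

    disk_mb_s, decode_mb_s = 250, 100
    raw_mb, png_mb, pixels_mb = 9.0, 4.0, 9.0

    t_raw = raw_mb / disk_mb_s                            # just read it: ~36 ms
    t_png = png_mb / disk_mb_s + pixels_mb / decode_mb_s  # read + inflate: ~106 ms
    print(f"raw: {t_raw*1000:.0f} ms, png: {t_png*1000:.0f} ms")

Even with perfect overlap of reading and decoding, the PNG path would still be bounded by the ~90 ms of decode time, so the uncompressed file wins on a fast disk.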


True, bandwidth is a concern. It might be possible to copy memory directly from disk to the GPU, which may be more efficient.


Assuming the on-disk version of a graphic is compatible with your particular GPU. Even within the same manufacturer two generations of GPUs might not use the same image format.

A generalized on-disk representation is safer and likely more forwards compatible.


Bitmaps take both bandwidth and space, either can be an issue.


That seems like an interesting use case for resource forks. Decompress once and store the result in a resource fork of the original PNG. If storage space becomes an issue, those resource forks can be purged along with other caches.


It’s the first thing the app does, or at least it used to be: find an image and display it to the user while the rest of the app loads.


Seems related to how Apple handles PNGs, like the recent discussion about ambiguous PNGs: https://news.ycombinator.com/item?id=29586549


Yeah, we’ve come full circle now. This was the first post: https://news.ycombinator.com/item?id=29573792

Then someone posted the GitHub repo, and now someone has posted the link that provided the initial insight (linked to from the original page).


I was wondering if exactly that was possible: generate an iDOT that leads to chunks a "regular" renderer would never look for.

Really curious that it happens in Safari sometimes though...


> (The odd capitalizations are used as binary flags and determines whether a chunk is required or optional, and when it is retained or removed.)

I found this commingling of human-readable data and metadata quite novel. I can't bring to mind another format which does this.
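For anyone curious, the case-as-flags scheme is easy to decode: bit 5 (the ASCII lowercase bit) of each of the four type bytes is a property flag. A small illustrative sketch, not from the article:

    def chunk_flags(name: bytes):
        # Decode the case-based property bits of a 4-byte PNG chunk type.
        assert len(name) == 4
        return {
            "ancillary":    bool(name[0] & 0x20),  # lowercase 1st letter: optional chunk
            "private":      bool(name[1] & 0x20),  # lowercase 2nd letter: not a public registered chunk
            "reserved":     bool(name[2] & 0x20),  # must be uppercase in a valid file
            "safe_to_copy": bool(name[3] & 0x20),  # editors may keep it even without understanding it
        }

    print(chunk_flags(b"iDOT"))
    # {'ancillary': True, 'private': False, 'reserved': False, 'safe_to_copy': False}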


> The only value I have ever seen is "0x40". (The iDOT chunk contains 28 bytes, and this value is 0x28 in hex. So maybe this is related?)

This is backwards, I guess? 0x40 is clearly not 0x28; nor is decimal 28 equal to 0x28; but 0x28 is 40 in decimal.


If I recall correctly, the iPad/iPhone-compiled PNG files had the color channels optimized for the graphics hardware of the devices as well - they wouldn't decode properly anywhere else.


Yes, but "optimized" is overselling it. It stored RGBA in a premultiplied form.

IMHO it was an unnecessary spec breakage, and a misplaced focus. The most expensive part of decoding PNG is zlib inflate, so any reduction in input data is more important than post-processing savings.

If Apple stored app launch images (the feature it was invented for) as opaque RGB, then colorspace conversion would also be free, and they'd also be saving a quarter of the most expensive stage of decoding. Instead, they kept the big expensive part big and expensive, and shaved three instructions off the almost-free part.



