"This is fine for browsers, as the heap doesn’t need to be greater than 4 GB anyway."
I develop a photo editor, www.Photopea.com, where people often edit, for example, 100-megapixel photos. Then Chrome may crash (because of the 4 GB limit) and they lose their work. I have to recommend that users switch to Firefox for such cases.
Is it really necessary to keep the whole uncompressed image in memory (as well as all undo steps) at all times? I guess the browser environment makes trading RAM for disk space difficult, but Photoshop ran fine with moderate amounts of RAM and plenty of scratch space on disk a few decades ago. It's probably easier to just keep everything in memory, but perhaps not strictly necessary, as not everything needs the same latency.
It definitely should, but keeping your working set smaller can also make things faster (as seen with Chrome here). This depends very much on the workload and what's being done to the data in memory. My point was just that a raster image editing program probably doesn't require a huge memory footprint just to edit images well, since a lot of the memory use is typically not the image you're seeing but history and undo state, which is neither latency-critical nor frequently accessed.
Chrome's 32-bit address space / 4 GB limit is different from having a 64-bit machine with a 48-bit address space and 4 GB of RAM. In the latter case you can keep allocating past 4 GB; it will just get slower as the pager starts swapping pages to and from disk. But in V8 with pointer compression, you will just hit a brick wall.
To swap memory to disk, you still need address space to map it to. Say the editor has allocated 3.4 GB and then makes another 600 MB layer, allocated at roughly 0xD0000000. With no address space left, it asks for yet another layer, and the allocator returns NULL. It can't give you a pointer to a 600 MB region, because there's no address space left. Paging out the layer at 0x20000000 would not help, because that wouldn't magically free up the addresses 0x20000000-0x40000000. They would just refer to pages that are currently on disk, and still be 'occupied' address space. You still need an address for the new allocation, and there is no room to put it.
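To make that concrete, here is a minimal C sketch of the same mechanism (nothing to do with V8 itself): built as a 32-bit binary, malloc starts returning NULL somewhere around the 3-4 GB mark even on a machine with plenty of free RAM and swap, because what runs out is addresses, not storage.

```c
/* Minimal sketch of address-space exhaustion.
 * Build as a 32-bit binary, e.g.:  gcc -m32 -O2 exhaust.c -o exhaust  */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const size_t chunk = 256u * 1024 * 1024;   /* 256 MB per "layer" */
    size_t total_mb = 0;

    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL) {
            /* No contiguous address range left to hand out. */
            printf("malloc failed after %zu MB\n", total_mb);
            return 0;
        }
        memset(p, 1, chunk);                   /* actually touch the pages */
        total_mb += 256;
        printf("allocated %zu MB so far\n", total_mb);
    }
}
```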
No slow degradation -- it won't page fault at all unless you otherwise fill the RAM on the machine. So the image editor just falls over, with an uncatchable OOM exception I presume, and with no perceptible warning from page-fault slowdown just prior. It will go full speed into the brick wall. For your account to be accurate, V8 would have had to implement its own virtual address space, which it has not. Virtual addressing basically requires a hardware TLB to be fast, and V8's "TLB" here is just `mov eax, [whatever]; add rax, r13`. Anything other than that would have completely defeated the speed gains from locality.
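For reference, a toy C version of what that decompression amounts to (an assumed base-plus-offset scheme in the spirit of the blog post, not V8's actual code):

```c
#include <stdint.h>

/* Toy model: the heap is one contiguous 4 GB "cage"; objects store 32-bit
 * offsets, and decompression is a single add against the cage base kept in
 * a register -- the software analogue of the mov/add pair above. There are
 * no page tables and no TLB involved, so there is nothing to fault on and
 * nothing to transparently spill to disk. */
typedef uint32_t compressed_ptr;   /* 32-bit offset into the cage */

static uintptr_t cage_base;        /* set once when the 4 GB region is reserved */

static inline void *decompress(compressed_ptr c) {
    return (void *)(cage_base + (uintptr_t)c);
}

static inline compressed_ptr compress(const void *p) {
    return (compressed_ptr)((uintptr_t)p - cage_base);
}
```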
This doesn't account for nuances like whether ArrayBuffers would be allocated elsewhere and have no pointer compression applied, but it's definitely true of general objects. For a regular JS program to fill 4GB with normal web app things would be a miracle, and the image editors of the world can probably still work if they make the big-allocation APIs use full-size pointers.
Keeping your working set smaller doesn't mean you have to give up keeping everything 'in memory': you can carefully craft your memory access patterns to work well with the OS's virtual memory management.
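In a native program that can be as simple as mapping a big scratch region and telling the kernel how you intend to use it; a Linux-specific sketch (and of course a web page doesn't get these knobs, which is part of the problem being discussed):

```c
#include <stddef.h>
#include <sys/mman.h>

/* Map a large anonymous scratch region and hint that it will mostly be
 * walked sequentially (e.g. tile by tile), so the pager can read ahead
 * and evict behind us. */
void *make_scratch(size_t bytes) {
    void *p = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return NULL;
    madvise(p, bytes, MADV_SEQUENTIAL);
    return p;
}

/* For ranges we are completely done with: let the kernel reclaim the pages
 * right away (their contents are discarded) without giving up the address
 * range itself. */
void discard_range(void *p, size_t bytes) {
    madvise(p, bytes, MADV_DONTNEED);
}
```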
It is true that I could use mipmaps and show only a preview of a scaled-down version of the image.
But the rendering pipeline of PSD documents is extremely complex. There are not just layers, but also layer styles, raster masks, clipping masks and adjustment layers. Folders of layers can have their own layer styles and masks. You need to allocate separate buffers to render "sub-trees" (roughly sketched below). And everything is GPU accelerated (over WebGL).
I am afraid that remaking it into a mipmapped system would take me something like 1,000 hours of work, so I think it is easier to remake V8 (which I guess could be like 100 hours of work). As typical RAM capacities keep growing, they will have to do it at some point anyway.
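To give an idea of why those sub-trees cost memory, here is a very rough sketch of the problem (single channel, CPU-side, purely illustrative; the real pipeline is WebGL and far more involved):

```c
#include <stdlib.h>

/* A group with its own mask cannot be blended layer by layer into the
 * destination: its children must first be composited into a separate,
 * document-sized buffer. Every nested masked group adds another such
 * buffer on top of the layers themselves. */
typedef struct Layer Layer;
struct Layer {
    int           is_group;     /* folder of layers vs. single raster layer */
    const float  *mask;         /* optional mask applied to the whole group */
    const float  *pixels;       /* leaf only: source pixel values           */
    Layer       **children;
    int           child_count;
};

static void composite(const Layer *node, float *dst, size_t n) {
    if (!node->is_group) {
        for (size_t i = 0; i < n; i++)
            dst[i] += node->pixels[i];              /* toy "blend" */
        return;
    }
    /* Masked group: render children into their own scratch buffer. */
    float *tmp = node->mask ? calloc(n, sizeof *tmp) : dst;
    for (int c = 0; c < node->child_count; c++)
        composite(node->children[c], tmp, n);
    if (node->mask) {
        for (size_t i = 0; i < n; i++)
            dst[i] += tmp[i] * node->mask[i];       /* apply the group mask */
        free(tmp);
    }
}
```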
The object heap can be up to 4 GB. The contents of ArrayBuffers and the like could live in a separate, much larger, space. I don't know if V8 is doing this, though.
> The reason this works is because the backing stores of array buffers are allocated using PartitionAlloc (I’m not entirely sure if this is still the case, but this was the case approximately 3-4 years ago, and I haven’t seen anything to suggest that it has changed). All PartitionAlloc allocations go on a separate memory region that is not within the V8 heap. This means that the backing store pointer needs to be stored as an uncompressed 64-bit pointer, since its upper 32 bits are not the same as the isolate root and thus have to be stored with the pointer.
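Spelled out as a hypothetical layout (simplified, not V8's real class definition): in-cage references can shrink to 32 bits, but the backing store pointer has to stay a full 64-bit pointer because it points outside the cage.

```c
#include <stdint.h>

typedef uint32_t compressed_ptr;     /* 32-bit offset within the 4 GB cage */

/* Hypothetical, simplified object layout -- not V8's actual ArrayBuffer.
 * Fields that reference other objects inside the cage can be compressed;
 * the backing store lives outside the cage (PartitionAlloc), so that one
 * field remains an uncompressed 64-bit pointer. */
struct array_buffer_like {
    compressed_ptr map;              /* in-cage metadata object */
    compressed_ptr properties;       /* in-cage                 */
    uint64_t       byte_length;
    void          *backing_store;    /* out-of-cage: full 64-bit pointer */
};
```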
If you need help handling images larger than your allocatable memory, I seriously recommend taking a look at the GDAL library's driver interface; it's designed to get the best performance from either memory or disk reads/writes, possibly on compressed formats for the few that allow partial reads/writes.