
I don’t think the argument holds up very well. Apple has made three processor architecture changes, including one on the current OS. “Too complicated” doesn’t really align with Apple’s execution history. The fat binary support Apple has is a huge tool, and the ability to add instructions and optimizations to their chips to help with x86 emulation is a big deal.
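To make the fat binary point concrete: a universal Mach-O file is just a small header listing one slice per architecture, which is what lets the same app carry x86 and ARM code. A rough sketch of reading that header (assuming an Apple SDK that provides <mach-o/fat.h>; illustrative only, it ignores FAT_MAGIC_64 and the details of thin binaries):

  #include <stdio.h>
  #include <stdint.h>
  #include <arpa/inet.h>   /* ntohl: fat headers are stored big-endian */
  #include <mach-o/fat.h>  /* struct fat_header, struct fat_arch (Apple SDK) */

  /* Sketch: list the architecture slices in a fat ("universal") binary. */
  int main(int argc, char **argv) {
      if (argc < 2) return 1;
      FILE *f = fopen(argv[1], "rb");
      if (!f) return 1;

      struct fat_header fh;
      if (fread(&fh, sizeof fh, 1, f) != 1 || ntohl(fh.magic) != FAT_MAGIC) {
          puts("not a fat binary (single-architecture Mach-O?)");
          fclose(f);
          return 0;
      }

      uint32_t n = ntohl(fh.nfat_arch);
      for (uint32_t i = 0; i < n; i++) {
          struct fat_arch fa;
          if (fread(&fa, sizeof fa, 1, f) != 1) break;
          printf("slice %u: cputype=%d offset=%u size=%u\n", i,
                 (int)(int32_t)ntohl((uint32_t)fa.cputype),
                 ntohl(fa.offset), ntohl(fa.size));
      }
      fclose(f);
      return 0;
  }

An ARM transition would mostly mean one more slice per binary, which is why the tooling side of this is relatively routine for Apple.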

I don’t know if Apple will ever actually do this, but it seems odd to suggest it’s not feasible given past performance and their current technology holdings.



Apple has made two transitions while retaining compatibility, in one case with emulation of the old architecture being around the speed of the previous (68k to PPC) and in the other with emulation being faster than the previous architecture (PPC to x86). There's no ARM that's fast enough to emulate an x86 in the same power envelope, so right now any transition would either require all performance sensitive apps to be ported or would result in machines that were slower for many tasks.

Bear in mind that both previous transitions were due to the processor line Apple was using being effectively EOLed (explicitly in the case of 68k, more implicitly in PPC - nobody was interested in making CPUs that had the appropriate performance/power ratio for consumer machines). Apple is doing great things with ARM, but they're not /that/ far ahead of the rest of the industry that they can pull off a seamless transition in the near future.


Rosetta apps were not faster in emulation. This gets repeated a lot, but while the Intel Macs were clearly faster at native apps, Power Macs were still faster clock for clock at their own PPC code. Barefeats did some head-to-heads; at least one benchmark has a Quad G5 beating a 3.0GHz Mac Pro (then Apple's top of the line) on PPC software, and the 2.66GHz model was clearly slower.

https://barefeats.com/quad06.html

Yes, the Mac Pro was way faster on its own turf, but that didn't overcome Rosetta's overhead. The speed advantage only became apparent once Universal apps were widely available.


> ...with emulation being faster than the previous architecture (PPC to x86).

This is not the history that I remember, at least with contemporary hardware. In my recollection, PPC emulation was generally slower, but it didn't necessarily really matter.

> There's no ARM that's fast enough to emulate an x86 in the same power envelope, so right now any transition would either require all performance sensitive apps to be ported or would result in machines that were slower for many tasks.

I don't think that's true, at least at the lower end.

Whether it is running x86 emulation or native, the Snapdragon 835 comes pretty close to the Celeron N3450, at least in terms of single-core performance. Both chips have a comparable power envelope.

https://www.techspot.com/review/1599-windows-on-arm-performa...

Emulation adds overhead of course, but it's worth noting that most ISAs are effectively emulated through micro-operations:

https://en.wikipedia.org/wiki/Micro-operation
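For what "emulated through micro-operations" means in practice, here is a toy sketch (not any real CPU's encoding, names made up): a single complex "add register to memory" instruction being broken into the simple load/add/store steps the hardware actually executes.

  #include <stdio.h>

  /* Toy illustration only: not any real microarchitecture's scheme. */
  typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

  typedef struct {
      uop_kind kind;
      int dst;   /* destination register (or -1) */
      int src;   /* source register (or -1) */
      int addr;  /* address register (or -1) */
  } uop;

  /* "ADD [addr_reg], src_reg" decomposed into three micro-ops,
     using register 0 as a scratch temporary. */
  static int decode_add_mem_reg(int addr_reg, int src_reg, uop out[3]) {
      out[0] = (uop){ UOP_LOAD,  0, -1,      addr_reg }; /* r0 <- mem[addr] */
      out[1] = (uop){ UOP_ADD,   0, src_reg, -1       }; /* r0 <- r0 + src  */
      out[2] = (uop){ UOP_STORE, -1, 0,      addr_reg }; /* mem[addr] <- r0 */
      return 3;
  }

  int main(void) {
      uop uops[3];
      int n = decode_add_mem_reg(5, 2, uops);
      for (int i = 0; i < n; i++)
          printf("uop %d: kind=%d dst=%d src=%d addr=%d\n",
                 i, uops[i].kind, uops[i].dst, uops[i].src, uops[i].addr);
      return 0;
  }

The reply below argues that this decode step is fixed in hardware and tuned for one ISA, which is very different from translating a foreign ISA on the fly.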


>Emulation adds overhead of course, but it's worth noting that most ISAs are effectively emulated through micro-operations

That's not even remotely the same thing, because that microcode is optimized for the processor's ISA and is specific to the processor's microarchitecture, which is in turn optimized for that ISA. It's like translating a Wikipedia article from normal English to Simple English: you didn't cross a complicated language barrier.

If I may add my own comment: It's not worth noting at all because your ARM CPU still only implements ARM optimizations and your x86 CPU only implements x86 optimizations.

If you had proposed adding hardware acceleration to make emulation of a specific architecture faster, then maybe one could have squinted and called it emulation rather than secretly implementing an x86 CPU inside your ARM CPU.


I don't think it's that different. Microcode can work efficiently with multiple ISAs. In practice, x86 and amd64 are actually fairly different targets. With amd64 you have far more registers and no x87 weirdness, for instance. ARM chips also generally support multiple ISAs.

Of course you have trade-offs regarding optimizations, but that's true at multiple levels. For example, many exotic x86 instructions (like the BCD ones) aren't as optimized as they could be.

> If you had proposed adding hardware acceleration to make emulation of a specific architecture faster then maybe one could have squinted and said it's emulation instead of secretly implementing a x86 CPU in your ARM CPU.

Of course I'm talking about hardware-level support. I'm talking about the things that already exist.


Do the ARM companies besides Apple even care about desktop long-term? There seems to be enough billions to be made in Mobile and IoT that they can keep focusing on that indefinitely.


Apple also obsoletes hardware without making any transition or major software or hardware changes. I can develop for the latest Android on my 2009 MacBook, but not for the latest macOS or iOS. I can probably install the latest macOS in VirtualBox on the 2009 MacBook, but not on the actual hardware...


Apple sells iPad Pros, which they call "computers". Those are powerful devices, but there is not much productivity software available, and the UI is very different from traditional desktop interfaces. Apple could offer an Air-class machine powered by ARM with a familiar macOS, and most developers would be able to support it in a matter of weeks, if not days.


There is Microsoft Office and Adobe has previewed full Photoshop on it. What other mainstream productivity software do most people need?


Visual Studio?


I did say “mainstream”. Despite the HN bubble, development is not in the “mainstream”. Besides, personally, I can’t stand developing on just a laptop screen. I need at least one external monitor and preferably two.

If I had to do a lot of development on the go, I would invest in a portable USB monitor or use my iPad as a second display.


>but there is not much productivity software available

Huh? There is more productivity software available for iPads than there was for a PC in 1995.


The iPad doesn't compete against PCs from 1995 though, and PCs from 1995 didn't compete with PCs from 2019.


From personal experience: the slowest PPC Mac, the 6100/60, was about the speed of the LC II (16MHz 68030 with a 16-bit bus) under emulation. It was much slower than my LC II with a 40MHz 68030 card. It wasn't until Connectix came out with a better emulator that emulation approached the speed of my old accelerated LC II. It also didn't help that parts of the operating system were still emulated.


Furthermore I predict that Apple won’t treat it like a transition, rather it will be a long term dual platform strategy. They’ll encourage developers to build fat binaries and have good x86 translation support in the interim.

But it’ll be long term, at least five years. Standard iMacs and MacBooks will be moved over to ARM quickly, higher-end iMacs and possibly some MBP SKUs a few years later, and pro devices will stay Intel indefinitely.

The transition only needs to end if and when Apple can beat Intel/AMD for uncompromising high performance. Maybe that will happen, but until then, fat binaries are a perfect solution.


There are definitely some common threads between this and Apple’s push to have apps use bitcode. It would make this duality a lot easier from an app-distribution perspective, without the need for an emulation layer. My only guess as to why Apple hasn’t required bitcode by now is the sheer number of 3rd-party libraries out there that aren’t built with bitcode enabled.


Chris Lattner has stated several times that bitcode does not help with porting to other platforms, though.


He’s said since that while that was true when he left Apple, they have clearly made strides in the direction of making bitcode platform-neutral. Notably the shift from 32-bit to 64-bit ARM on the Watch was totally transparent to developers; they didn’t even need to resubmit their apps. And that’s not a small architectural change.


Can you point me to this quote? Not doubting, just want to read more on the topic.



Thanks


Only when talking about standard bitcode that everyone has access to directly from LLVM.

Apple's toolchain uses a modified version of bitcode, making it more portable.

There is a session from WWDC 2019 on how using bitcode made the 64-bit transition of watchOS apps relatively easy.


They are forcing notarization in the next OS X update, which means they can eventually force bitcode.


They don't need to force anything: Apple will simply release the hardware and make it straightforward for developers to build fat binaries. And why wouldn't developers eagerly comply?

— The vast majority of programs will probably recompile with no changes (see the sketch after this list).

— Many developers who will need to make ARM-specific optimisations will have already done that work for iOS releases. Ditto for graphics optimisations for Apple's custom graphics cores.

— For the remaining developers, the skill of optimising for Apple's ARM CPU and GPU is already mature in the marketplace.

— Most apps which don't fall under the aforementioned categories are probably high performance apps that won't be important for buyers of the smaller iMac and MacBook Air.

For a platform with a history of smooth transitions, this would be the easiest "transition" in Apple's history.
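To illustrate the "recompile with no changes" point from the list above, a hedged sketch (the function name and code are made up): portable code builds identically into both slices of a fat binary, and only hand-tuned paths need per-architecture work, which many teams have already done for iOS.

  #include <stddef.h>

  /* Illustrative only: the portable loop below compiles unchanged into
     both the x86_64 and arm64 slices of a universal binary; the #if
     branches are where per-architecture intrinsics would go. */
  void add_saturate_u8(unsigned char *dst, const unsigned char *a,
                       const unsigned char *b, size_t n) {
  #if defined(__x86_64__)
      /* x86 slice: SSE/AVX intrinsics could live here */
  #elif defined(__arm64__) || defined(__aarch64__)
      /* arm64 slice: NEON intrinsics could live here (often already
         written for the iOS build of the same code) */
  #endif
      for (size_t i = 0; i < n; i++) {
          unsigned int s = (unsigned int)a[i] + b[i];
          dst[i] = s > 255 ? 255u : (unsigned char)s;
      }
  }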


The key argument is that transitioning low- to mid-end Macs isn't enough. They would also need to transition the high-end pro machines. That would require engineering super-high-performance, massively multi-core ARM CPUs with huge I/O throughput that would only sell in the tens of thousands of units.

There is just no way on earth they could ever be economically viable. The economies of scale are just dire. Intel can only do it because they sell many hundreds of thousands, or even millions of these high end CPUs, not just a few tens of thousands.

So if that's correct, the real point of contention is: does it make sense to transition only part of the Mac lineup to ARM?


I know this isn’t your argument, but that doesn’t make sense either.

Why hold back the entire Mac line for a model that might sell a few tens of thousands of units? Especially since Apple doesn't know if the new Pro is really viable at all.


I think the argument against building a custom CPU for the Mac Pro is pretty good, as it puts Apple in a situation that is different from past architecture changes.


The future of Pro machines is largely about VMs. Heck, the recent past of Pro machines is largely about VMs — that's the reason I so often hear people complain about 16GB RAM limits, for example.

In that world, I wouldn't be surprised to see a Mac Pro running one or a few very high core-count CPUs underneath an Apple/ARM-based software stack, even if the CPUs come from Intel or AMD. That would let a putative Mac Pro run the same software as the battery-friendly laptops.


You don't need a VM to run Photoshop. The whole point of the desktop Pro machines is high-performance computing for creative applications. VMs have no place in that. RAM is an issue because it's a limiting factor for many rendering and editing applications.

Laptop Pro is arguably different, but I suspect content creators still outnumber developers by a significant ratio.


Many OS instances nowadays run on top of a type 1 hypervisor, Photoshop doesn't even notice it.


VMs do have a place for that now: many shops run thin clients into a cluster for workstation uses.


The past cases are qualitatively different - an architecture switch to much more powerful processors with some degree of software backward compatibility provided. That's not yet a practical option.



