I hope that the possibility of an industry-wide switch to ARM doesn't take away the flexibility we now have with the PC. Standardized parts and a well-defined firmware interface (and thus "standard" bootloaders instead of a multitude like on ARM) are what make the PC truly special. Moving to a world where you can't install Linux or where every single machine has incompatible firmware would be disastrous, IMHO.
Current PCs can last for decades thanks to modern Windows and Linux being able to boot on old machines, without the need for special bootloaders or new firmware, just somewhat generic drivers (see AHCI, Intel HDA sound, etc.).
>I hope that the possibility of an industry-wide switch to ARM doesn't take away the flexibility we now have with the PC
I think that ship has long sailed as we failed to treat smartphones as handheld computers and subject the smartphone/OS manufacturers to the same scrutiny as computer/OS manufacturers.
This is a missed opportunity, especially for those countries where smartphones are the first computer for the majority of the population; now their computing can be held hostage by a couple of companies, just like their data.
Now computers with only manufacturer-approved OSes, only manufacturer-approved software (even content?), and arbitrary security update / OS upgrade cycles are already becoming a reality.
We had that in the form of Windows CE based smartphones. As open as a PC, no walled garden, no app stores, just load your app and launch it. The market spoke decisively in rejecting it in favor of the iPhone and Android devices.
>The market spoke decisively in rejecting it in favor of the iPhone and Android devices.
Sure, but did the market reject it because it was a more open platform, or because of other considerations? Given that the general public is largely unaware of DRM and the like, it was likely for lack of other features. There's no inherent reason why a more open phone OS couldn't succeed in the market. After all, Android devices are generally more open than iOS devices and they've got the vast majority of the smartphone market.
>Sure, but did the market reject it because it was a more open platform, or because of other considerations?
Obviously other considerations.
Where there is only a small handful of options within a class of product, any statement about "what the market wants" should be limited to very broad strokes. The consumer doesn't select from the entire possibility space, they buy from the available options.
Any "correlation is causation" statement about minor details, when there are dozens of other larger distinguishing features, is the work of a lying huckster. Giving them the benefit of the doubt doesn't mean they're not lying; the benefit of the doubt is that they're lying to themselves too. That's how wrong it is to make those inferences.
I did make the context quite narrow and add the caveat for people who are simply deluding themselves, but yeah, it's rude. I've spent enough of my professional life listening to people try to tell me what sales numbers "mean" to have any higher of an opinion.
> "Sure, but did the market reject it because it was a more open platform, or because of other considerations?"
My point was that openness didn't outweigh other considerations in the eyes of people purchasing them.
Windows Mobile smartphones were actually dominant in the market until the iPhone was introduced. IIRC from the industry data I had access to, there were over 100 different models in a variety of form factors from a couple dozen OEMs concurrently being sold at its peak. So it was at least ahead of all the alternatives before that point, which weakens the argument that it was "other considerations" that were the main culprit.
They weren't as open as you remember for most people using pre-iPhone smartphones. Carriers did all sorts of things to cripple functionality so they could sell it back to you.
A major factor of the iPhone's early US success was forcing a major US carrier (AT&T) to stop doing consumer hostile things and just let the device connect to the network as intended.
So, in part, it was improved openness that drove iPhone/Android's success. It just was network openness instead of device openness.
Are you really arguing that the Windows CE platform was more open than Android?
Back in 2005, an open source Linux based phone was a hippie pipe dream, and carriers were FIRMLY against allowing anything open source on their networks. Android blew all of that wide open, and now the most popular phone OS in the world is an open source Linux based OS. This is what we were all dreaming about back then. I feel like this is an incredibly underappreciated fact.
Yes, some carriers lock down their Android phones, because it's an open platform, so carriers can do what they want with it, but since the dawn of Android there have always been high end Android phones that were trivial to put into developer mode and install whatever wacky OS mods you want.
Except the Android of today doesn't have much in common with the Android from ten years ago. Android might be open source, but it is by no means like the Linux you'd run on your PC, and it is being increasingly built to wall you into Google's services, while Samsung and the rest try to remove Google as much as possible from the equation and sway you to their services. Basically you're the cash cow they try to heave around. What a mess to be in.
Oh, and let's not forget that while Android itself is open source, the Android on your phone right now is full of binary blobs for even the basic shit. Camera driver? Image processor? Fast charging? LTE modem? Fingerprint sensor? OLED calibration? Most of them are binary blobs specific either to the chip/sensor manufacturer or to the OEM who designed the phone, as they consider this stuff proprietary IP they wish to keep secret from the other Android OEM competitors. Move to LineageOS and you risk losing a lot of your phone's fancy features because of that.
I love my Androids but it's still a mess and the openness is slowly going away (since Android 9 we lost call recording capability, which was a huge blow for my use case). At least we still have F-Droid.
Only devs that don't code for Android can ever consider it an open source Linux based OS.
The fact that it uses Linux kernel and related features is an implementation detail, only visible to Google and OEMs.
What regular developers see is a Java/Kotlin based userland, with a native layer having a set of predefined stable APIs, which includes ISO C, ISO C++ standard libraries, a POSIX subset, GL/Vulkan and a couple of Android specific ones.
Tomorrow Android can be another Fuchsia subsystem, and again, only Google and OEMs will notice the change.
> now the most popular phone OS in the world is an open source Linux based OS
That's an interesting comment, because it really calls attention to how us nerds often perceive things differently from how the rest of the world does. I am guessing that most consumers would not recognize something that just used vanilla AOSP as an Android phone. Google's closed-source add-ons are a huge part of what people think of when they think, "Android."
> "Are you really arguing the that Windows CE platform was more open than Android?"
The topic is the control that the user has over a phone they purchased, not source code availability for the OS, so yes, I am. Windows CE smartphones provided about the same access as Windows 98/NT in terms of the freedom to run apps and giving apps unfettered access to OS-level APIs. The same can't be said for Android and Apple's sandbox + walled garden model.
The only Linux smartphone I can think of from that era that offered similar access and control was the (I think?) Nokia N900 and interest in those was IIRC negligible.
I feel like you're trying to sell the utter lack of a security model as some sort of feature.
Power users and developers have always had the ability to circumvent the Android sandbox on their own devices.
Windows CE just never tried to implement security. Malicious apps had unfettered access to everything, just like in most Windows based ecosystems.
N900s were pretty rad little toys, but anything you could do with them you could also always do on Android. The base Android OS doesn't have a built in package manager, but you can easily run a more classic Linux distro in a chroot or something.
Linux Deploy has been around for almost a decade now.
> "I feel like you're trying to sell the utter lack of a security model as some sort of feature."
I'm just describing the way things happened to be back in that era.
> "Windows CE just never tried to implement security. Malicious apps had unfettered access to everything, just like in most Windows based ecosystems."
Yes, and? That's just as true of early PalmOS and the other early mobile OSes. Security against malicious apps just wasn't a consideration at that time on smartphones/PDAs precisely because there was no app store and no significant quantity of malicious apps.
Can I boot Linux on an iPhone? Install Android instead?
I don't think we ended up in a place much better off from a freedom perspective.
Android is a little better because parts of it are open. Google has been steadily moving things into play services though, which is not open. There is now an open source replacement for play services - I don't think it's very compatible yet though.
> Google has been steadily moving things into play services though, which is not open.
This is a good thing, they've been de-googling the base OS and making it as stripped down and bare-bones as possible.
This is a huge win for security, privacy, and freedom in general, and exactly what they should be doing. Android should be a small, secure Linux based operating system with an API that makes it easy to build services on top of it. The Google ecosystem should be completely decoupled from the OS.
I'm relatively ambivalent to a walled garden if and only if there's a (reliable) way to opt out of it. In Android's case, it's not only quite possible to opt out of Google's garden, but arguably rather commonplace (Amazon's "Fire" devices, most custom ROMs by default, etc.). In iOS' case, it's outright impossible.
My experience is quite the opposite: they are hardcoding more things to be hard to replace - only /system apps can be exempt from forced battery saving limits, the new file-based encryption is moot on an unlocked bootloader, etc.
The market rejected it because it had crappy UX. Those smartphones were marketed for balding dads in white shirts, working in cubicles. Steve Ballmer types.
The iPhone had 60 fps smooth animations, a built-in iPod, a huge capacitive touchscreen, and most importantly none of the baggage of being associated with balding managers in their 40s. It was sexy. It was a fashion statement.
> "Those smartphones were marketed for balding dads in white shirts, working in cubicles. "
While true, I wouldn't say that's a burn, because it was exactly those people who could afford smartphones and justify paying the painfully high bandwidth charges of that era. Remember, those phones cost as much as an iPhone today but in 2005 dollars (ouch). The ubiquitous Blackberry, a.k.a. the "Crackberry", and Palm devices were also aimed at the same business and professional market.
These early devices paved the way to the current smartphone market.
Hehe, true! Cubicles are much better from a privacy perspective, but the startup kids tore them down on the dubious pretext of hindering collaboration. It was only a matter of time before BigCos followed suit, if only for the obvious cost saving reasons.
I don't mind open offices for the 5-man startups made of guys who met in college and are trying their luck with blood, sweat, and tears; however, I really do mind them in the nouveau wannabe pseudo-startups where it's an open chicken coop for the developer plebs and private offices for sales/marketing/management.
But it's ok, they give you a ping pong and foosball table. /s
That is exactly the point. When a 16-year-old non-male teenager is accepting it, then you've won. See iPhone, Facebook, WhatsApp, TikTok or whatever they are doing nowadays.
I owned two of these devices as a personal information manager (contacts, calendar, ...). They were not comfortable to use, and the data plans then were so prohibitively expensive that modern usage was far away.
They rejected it because the experience was garbage not because it was open. My least favorite part is constantly manually pruning open apps to fit in tiny memory.
I see this sorta argument frequently but there was a time when only IBM was capable of making PCs. It seems whoever is first to market ends up not having an open platform, which later is superseded by a more open platform.*
See the variety of open operating systems for devices, e.g. Sailfish, postmarketOS, librephone, etc.
> there was a time when only IBM was capable of making PCs
They may have been the only ones that could use the term "Personal Computer" for a short time, but they were not the only ones making PCs. You had Commodore, Amiga, HP, Wang, Apple, etc. Even still, IBM's PCs were not locked down and were, for the most part, based on an open architecture allowing you to freely upgrade parts. Later down the line (specifically with the original IBM PC), when the technology allowed for it, you could even install other operating systems like CP/M-86, UCSD p-System, and MS-DOS.
Right. My point was that "PC" was used exclusively for IBM as a marketing term, but it was not the only "personal computer" available. PC became a generalized term much later, although Apple still held on to PC being a term for any computer that was Windows-based (or simply any computer that wasn't a Mac).
You can buy a Raspberry Pi for $30, and run whatever you want on it. For those in lower-income nations this is an absolute game changer.
Phones need to sorta be closed; who wants a null pointer exception to lock the dialer UI in an emergency?
Of course if you really want to, you can buy one of those open phones and program your own dialer from scratch, not a very good idea for the typical user though
>You can buy a Raspberry Pi for $30, and run whatever you want on it. For those in lower-income nations this is an absolute game changer.
Except that decent Raspberry Pi configs that could replace PCs cost around $60-70 (in the EU), and once you add a display, peripherals, and a decent microSD card (which is still smaller, slower, and less reliable than even a budget SSD), you're looking at over $100. For that kind of cash, a second-hand x86 notebook (an old beaten-up Dell/Lenovo with an i5, 8GB RAM, and a 128GB SSD) gives you way more power and flexibility with what you can boot, install, and do with it; it's not even a competition. Not to mention it comes with a built-in battery so you can carry it to school, unlike a Pi. And before someone sniggers and shows me a link to a Raspberry Pi based notebook just to contradict me, ask yourselves if you would actually use that as a productivity daily driver for work and play vs an x86 laptop.
Sorry, even in developing nations, second-hand notebooks/PCs are a way better deal in every way as a personal computer than a Pi, which is why Pis have never taken off as PCs and are still stuck in the hacker, tinkerer, learning, embedded, and automation niches.
>Phones need to sorta be closed, who wants a null pointer exception to lock the dialer UI in an emergency ?
Why do they need to be closed? Having closed source software is no guarantee of quality nor security.
>Of course if you really want to, you can buy one of those open phones and program your own dialer from scratch, not a very good idea for the typical user though
By wanting more openness we don't mean reinventing the wheel and coding open source dialers from scratch; we mean we need Apple and Google to not lock their hardware to their app stores, so we (companies, users, banks, developers, and even governments) don't become prisoners of their walled gardens, which is basically the current situation.
> and once you add a display, and peripherals you're looking at over $100
I believe the basic idea of the Raspberry Pi is that everyone already has the display (a television), so the only peripherals you need are the power supply, keyboard, mouse, and HDMI cable, which together should be around USD 15 or even less (from a quick search for local prices here then converting to USD).
>I believe the basic idea of the Raspberry Pi is that everyone already has the display (a television)
Sounds like a pretty poor assumption to me. In the EU at least, over half the people I know own no television (the TV tax is huge here and, excluding boomers, nobody watches it anyway because of Netflix), and even if we assume all families in developing nations own a TV in the household, it would be difficult to do any productive work on it, as the TV would probably be used a lot for, you know, watching TV and other entertainment rather than being reserved just as a tool. Good luck writing your homework on your TV while your dad wants to watch his game and your mom her show.
Even in developing nations, second hand PCs are such affordable commodities that they're a no brainer.
It's definitely a thing here in Germany and Austria, and a relatively expensive one at that for just possessing a TV, and I'll bet my hat it's a thing in most EU countries.
In most EU countries you pay the tax regardless of whether you own a TV; it is part of the electricity bill or similar.
I have lived half of my life in Germany and still find it strange having GEZ, or whatever replaced it now, instead of the everyone-pays approach that most countries on the continent take.
In the case of the UK, the purpose of the tax is to pay for the programming that is broadcast over the airwaves, i.e. it funds the BBC.
This is somewhat of a 'compare and contrast' to the US Model, where either stations are funded by advertisements, or the public broadcasting stations that rely primarily on grants, as well as what they can get from the government; but in the case of the US, technically 'everyone' is paying into PBS whether they have a TV or not.
Entertainment is a multi billion dollar industry. Companies like Disney are making record profits. Instead of relying on taxes and fees to make government-approved entertainment why not simply... make content people want to watch?
> Entertainment is a multi billion dollar industry. Companies like Disney are making record profits. Instead of relying on taxes and fees to make government-approved entertainment why not simply... make content people want to watch?
Well, it's a question of which devil you want.
In the case of Disney, much of their profits are driven by an entire lifestyle that they are selling. There's the Media itself, but then all the toys, the amusement parks, etc. And all of the advertising that goes with those, and all of the social impacts it has as a result (see next point for more)
If you go for an advertising based model, You wind up with 2nd or 3rd order social impacts. Even in the 80s, the decade of 'greed is good', the FTC put limitations on advertising in children's programming due to the psychological impacts it had. (I'd be curious to see another good look taken today, but regulators are even more toothless now than they were then.)
In both cases, however, you are at the end of the day going to be beholden to either your advertisers, or your corporate overlords for your content. And if they don't like that content, it will not see the light of day in any meaningful form, regardless of the public interest.
I feel all approaches have their place, but perhaps the advertising model could use a bit more modernization in its regulation, with what we now understand about behavioral psychology.
>Instead of relying on taxes and fees to make government-approved entertainment why not simply... make content people want to watch?
Because why bother when you get money for free by law through taxes?
Count your blessings, as at least the BBC produces world-class content viewed worldwide (former Top Gear, Dr. Who, etc.) so you get your money's worth, but in Germany, ARD and ZDF receive nearly as much money as the BBC yet produce almost nothing worth watching by the Germans, let alone abroad, because why would they give a fuck when that sweet taxpayer money keeps rolling in no strings attached regardless.
They're not so lucky. In Spain the public broadcasters are paid from government grants from general taxation, so any tax payer pays towards it whether or not they have a TV and watch it.
In any case, you're right, most of Europe has TV licenses of some kind or other [1]
Though notably in the UK, you can certainly own a TV without paying the TV license [2]
OK, sure, "we got to the moon with 4MHz", but try watching YouTube, editing photos, and using Google Docs with that configuration.
What you probably mean by a usable computer is either an embedded system or some SSH server, but what most people actually do with computers is access the web, and the modern web is so bloated that without fast hardware and proper software it becomes a pain to do anything remotely productive or fun.
That's why the Pi is not yet a good mainstream PC. It's too slow sometimes and has too many gotchas that users who are not tinkerers won't put up with.
YouTube plays fine on my 2020 laptop running Linux. The only feature I miss out on (that the hardware provides) is HDR, but it's a small loss for me to have my preferred development environment.
>Standardized parts and a well defined firmware interface (and thus "standard" bootloaders instead of a multitude like on ARM)
That was true ~15 years ago but less so today. Modern PC UEFI bootloaders are bigger and more bloated than any u-boot build in existence, and things like device tree support in Linux mean that you can support various boards with the same kernel (including boards that don't exist yet). Sure, you'll probably still need custom bootloaders for most boards, but it's not like you can swap your motherboard BIOS/UEFI chips and expect them to work either. The PC "platform" with hardcoded physical addresses for your serial ports is nice if you're running DOS, but it's not really relevant for modern operating systems, which have to employ much more complex hardware discovery mechanisms.
It's true that on ARM it's still a bit anarchic and bespoke in practice but that's mainly because there's no reason to push for strong standardization when every vendor is building their own kernels for their own devices anyway.
But the tools do exist (on Linux at least, not sure about the other OSs), if desktop ARM becomes mainstream things could settle fairly quickly. It's less of a technical problem and more of a "why bother?" problem.
> Modern PC UEFI bootloaders are bigger and more bloated than any u-boot build in existence, and things like device tree support in Linux mean that you can support various boards with the same kernel (including boards that don't exist yet). Sure, you'll probably still need custom bootloaders for most boards, but it's not like you can swap your motherboard BIOS/UEFI chips and expect them to work either.
This feels like, at best, getting back to where PCs already were, at least as a user. If a device tree can be embedded in firmware and fed to the booting system, then that's fine, it's just a minor implementation detail. If you need to go get a file and put it on your boot media, then that's extra friction. Likewise, I don't really care what the hardware's boot firmware looks like, but if I need a different bootloader per-device then that's a problem - can you imagine if Dell laptops and HP laptops needed different GRUB builds to work?
I feel like we're kind of mixing up a bunch of things here, or maybe I am.
The reason you don't need to mess with GRUB or your BIOS when you change your hardware is that modern hardware interfaces have built-in discovery and resource allocation features. Before that you had to deal with pesky low-level details like IRQ allocations and the like (which was a thing on PC not that long ago). That's not really a feature of "PC" per se, it's a feature of USB, PCI, etc. That stuff works exactly the same on ARM or anywhere else.
The things that need to be described in the device tree are generally internal components of the SoC and motherboard: things like IO expanders, SPI interfaces, serial interfaces, and of course the USB/PCI/... controllers themselves.
I think there may be some confusion because of the term "bootloader". It's true that both u-boot and GRUB are bootloaders, but bootloaders in the ARM world have a lot more to do than PC bootloaders; they're effectively the equivalent of UEFI + GRUB, not just GRUB. You can write a simplistic boot sector loader for PC in a few hundred ASM opcodes; that would be a lot more difficult for the average ARM bootloader because there's a lot more to do before you can just call into Linux. In general you won't even have RAM or caches available when u-boot begins executing.
And I think it's actually a better design, because it means that proprietary built-in firmware crap is usually kept to a minimum on ARM: the BOOTROM usually just initializes the bare minimum to be able to load u-boot (or some other loader) into on-chip SRAM and then it's over. Of course that means that the loader is significantly more complex, but that complexity has to be some place or another.
The practical upshot is: On PC, I can take a single disk with a single GRUB version and boot it on an arbitrary machine. If ARM machines can do that with their own UEFI/ACPI/ServerReady setup, then great. But if I need to alter that disk for each machine I want to use it on (inject board.dtb, add a custom u-boot, whatever), then that's not okay. I would happily accept u-boot and whatever else being on an onboard flash chip that I could update but didn't need to care about otherwise - that would get us back to "your PC boots via a UEFI system that's a whole OS unto itself, but it chainloads GRUB via standard interface so who cares".
We completely agree here. If ARM desktop becomes mainstream there will probably be some kind of abstraction layer needed at this level that doesn't really exist at this moment. I'm fairly optimistic that it will happen when it's really needed though.
Ah, okay, yeah. And honestly, the tech may already exist - I'm a little unclear on whether it's sufficient or not, but ARM UEFI is a thing - but I definitely look forward to seeing things standardize and have boards that actually work like that without extra hacks.
> You can write a simplistic boot sector loader for PC in a few hundred ASM opcodes, that would be a lot more difficult for the average ARM bootloader because there's a lot more to do before you can just call into Linux. In general you won't even have RAM or caches available when u-boot begins executing.
This is exactly the point OP's getting at, though, and the exact "discovery" features you handwave away. If I write an OS (or bootloader) on x86 with BIOS/UEFI, I can easily query available memory and start using it. Meanwhile, u-boot has to be told via an external resource how much RAM is available before it begins initializing. This is fine for embedded/bespoke development (since you'll know the hardware you're developing for and can make assumptions), but a pain for general OS development, where you have to inform the bootloader/OS of every device's capabilities.
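To make that contrast concrete, here's a rough sketch of the PC side, assuming the gnu-efi toolkit (efi_main, Print and the rest are standard gnu-efi conventions, not anything from this thread): a loader on a UEFI machine can simply ask the firmware for the memory map at runtime, so the same binary works without any board-specific knowledge baked in.

    #include <efi.h>
    #include <efilib.h>

    /* Sketch: discover RAM via the standard UEFI boot service GetMemoryMap().
       The firmware fills in the map, so the loader needs no per-board data. */
    EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
    {
        UINTN map_size = 0, map_key, desc_size;
        UINT32 desc_version;
        EFI_MEMORY_DESCRIPTOR *map = NULL;
        EFI_STATUS status;

        InitializeLib(ImageHandle, SystemTable);

        /* First call with a zero-sized buffer just reports the required size. */
        status = uefi_call_wrapper(BS->GetMemoryMap, 5, &map_size, map,
                                   &map_key, &desc_size, &desc_version);
        if (status == EFI_BUFFER_TOO_SMALL) {
            map_size += 2 * desc_size;  /* headroom for the pool allocation itself */
            status = uefi_call_wrapper(BS->AllocatePool, 3, EfiLoaderData,
                                       map_size, (void **)&map);
            if (!EFI_ERROR(status))
                status = uefi_call_wrapper(BS->GetMemoryMap, 5, &map_size, map,
                                           &map_key, &desc_size, &desc_version);
        }
        if (!EFI_ERROR(status))
            Print(L"Memory map has %d descriptors\n", (int)(map_size / desc_size));
        return status;
    }

On a typical ARM board without UEFI, the equivalent information has to arrive out of band, e.g. as a device tree blob that u-boot or the vendor BOOTROM hands over, which is exactly the friction being described.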
In the 1980s, the IBM PC standardized the entire industry around ISA ("Industry Standard Architecture") which was pushed forward by the manufacturers of IBM-compatible hardware ("clones").
The consortium of like-minded manufacturers later went on to help develop all the subsequent variants that persist to this day (PCI, the ATX form factor, etc.). The standardization benefited them greatly.
Now that NVidia is acquiring ARM Holdings, I hope Jensen Huang sees that if ARM is to take over the desktop it makes sense to bring the same level of standardization that allowed the PC industry (and NVidia) to dominate.
I think ARM Holdings needs to be the one to spearhead the initiative. It will help industry-wide adoption of ARM (for desktops, laptops and servers) which will benefit them greatly.
> if desktop ARM becomes mainstream things could settle fairly quickly. It's less of a technical problem and more of a "why bother?" problem.
This is exactly right, but I fear the incentives have changed and we'll never again see something like the x86/IBM-compatible/Wintel world of "standard" inter-compatible systems.
As you say, it's a "why bother?" problem, and no manufacturer will want to bother: Apple has no interest in making it easy to run other OSes, Microsoft wants to sell you "Windows Surface" devices, not general-purpose computers, corporate buyers want a "secure" locked-down environment with a signed bootloader...
Are there any powerful stakeholders who benefit from interoperability?
>Apple has no interest in making it easy to run other OSes
I'm not convinced that this is necessarily true. They advertised running Linux VMs during their presentation, they made a tool specifically for installing Windows on Macs and they even recently stated how Windows on ARM Macs is really up to Microsoft. If you could run Windows on an M1 Mac, it'd really turn it into a powerful and power-efficient "do-it-all" machine, since you could still boot into Windows and run that niche Windows software that you need from time to time.
Of course, but obviously they don't, and probably never will, allow you to replace macOS; rather, you can just run something else on top of it in one form or another.
Notably the PC standard meant things were interchangeable, not "interlayerable". You picked a CPU, you picked an OS, you did not have to emulate or virtualize whatever you wanted on top of what was imposed on you. Imagine that 30 years ago someone pitched a PC that only runs a specific OS and the only way to run anything else is on top of that like an app. You'd think "ridiculous".
Today having a device that runs as a sealed monolith both HW- and SW-wise, where all you can do is run stuff on top of it without the ability to replace anything, is considered a good sign. Tells you where we're going.
Untrue, Apple does permit you to replace your kernel on Apple Silicon macs. Hector Martin has a Patreon where you can sponsor and follow his work towards making Linux usable on these systems: https://www.patreon.com/marcan
That's great, let's wait and see when this actually happens and if it turns into a compelling alternative. I think Apple hit it out of the park with their ARM line so I'd love to use the hardware with an OS of my choosing, like I would historically on a standard PC.
Yeah - I'm backing the patreon and very excited for this work. I don't need it often, but when I do, being able to boot natively into Linux is tremendously useful.
Friend, the comment I replied to referenced Linux in VMs, and my point is clear and was made in good faith: Today I cannot reasonably run any Apple device without the Apple base software.
Your answer is dismissive and misses the point because you nitpicked on one particular word I used that doesn't fulfill one particular and currently irrelevant scenario: Windows in Bootcamp. What about every other scenario?
That's not really because Apple hates allowing you to install Windows, but rather because you can't actually acquire a Windows for ARM license. Once again, Apple has stated that Bootcamp for Apple Silicon is "up to Microsoft". And ARM Macs are still just a minority of the computers that Apple sells. The rest have Bootcamp, which allows you to just turn your Mac into a Windows machine.
Saying that "Apple will never allow you to replace macOS" is just plain wrong when they've been allowing you to do that for the past 14 years.
You might as well argue how Bootcamp doesn't run on iPads.
> but rather because you can't actually acquire a Windows for ARM license
You'd still have to develop all the device drivers required for this custom SoC. After all, Linux doesn't need a license, but without extensive documentation and support from Apple it's like saying you're free to challenge your boss openly: pedantically true, but impossibly impractical for most.
Unless Apple provides extensive documentation for 3rd-party drivers to be developed for their SoCs, it's just cheap talk from their side.
> You might as well argue how Bootcamp doesn't run on iPads.
This isn't much of a "gotcha" - I do think it's outrageous that Apple gets away with selling these gimped iOS devices that only run Apple-approved software.
Would consumers stand for this with any other devices? A stapler where you had to buy staples from the manufacturer? A car that only drives on manufacturer-approved roads? A dishwasher that will only wash approved plates?
Microsoft will keep selling Windows to Dell, HP, ...; they will not start writing their own operating systems. And at that point the same effect will kick in that it did with Microsoft/Intel/computer manufacturers.
If Microsoft, however, succeeds with its Surfaces like Apple does... then maybe we are screwed. Luckily, they do not.
They do, actually. Outside Apple havens like the US, Windows tablets and 2-in-1 laptops are the go-to option, as Google keeps failing to move the Android ecosystem beyond phone apps on a bigger screen.
I hope that any switch to ARM retains a degree of the flexibility available with the PC but it's important to recognise that
1. x86 probably isn't going away anytime soon and
2. the PC world has come with its own set of constraints - notably being locked-in to (to all practical purposes) a duopoly of CPU manufacturers with the resultant impact on pricing and customer choice.
Yeah, but the duopoly is at least committed to open source for most of its components. I can download open source drivers for the GPUs (provided by Intel and AMD) and network components (Intel), as well as most if not all of the components on my current motherboard.
As far as I know, ARM as a technology has little to say one way or another about closed vs open systems.
As for the broader trend towards closed: open standards naturally flourished in a world where one company wasn't huge enough to do everything themselves (except maybe IBM, but they dropped the ball to everyone else's benefit). Companies were forced to cooperate if they wanted to succeed.
That's no longer the case, and that's why open standards are dying. Just one of the many ramifications of corporate consolidation. Just one more reason for megacorps to be broken-up.
All of those things developed out of necessity by people who either needed the functionality, or didn't want to pay someone else for it. The demand was strong for open standards because the supply for software couldn't keep pace with demand. Open source didn't really eat away at large software companies for a long time because those customers were never going to buy that product to begin with.
Now that computing has been commoditized to the point where just executing code is a commodity itself it would appear that the supply of software has outpaced the demand for software. Open-source is now eating away at somebody's lunch, and will be dealt with in a competitive nature instead of a passive one.
The xbox and the iPhone are the security models I expect on almost all mainstream computing devices soon. I estimate that it will be impossible or near impossible to execute any code that was not approved by the hardware vendor, and obtained by the hardware owner IDing themselves to the code distribution/signing service ("stores").
Anonymous software publishing or use/consumption will be relegated solely to what will, by then, be ancient and slow hardware, and relatively dangerous (for non-experts) OSes.
Modifying the signed apps or OS to remove their phone home and spying will be impossible, like iOS is today.
There is presently very little stopping large vendors from implementing this across the board, and, indeed, consumer demand for malware-free (but not vendor spyware-free) platforms is driving this in a big way.
Already every mac available for purchase, even the Intel ones, requires online activation to wipe and reinstall the storage:
I imagine this will happen regardless of ISA; see what happened with the intel ME and related DRM, and Pixelbooks/T2 macs.
Most people buying computers don't want flexibility, they want reliability. OS alternatives, software alternatives, and privacy aren't really part of the buying decision outside of a small and mostly commercially irrelevant market segment.
The e-waste will be colossal when the vendors just decide to EoL them. You won't be able to reinstall the OS on any current Apple product (phone, tablet, or computer) once Apple's activation servers for those devices go offline in 10-20 years.
> Most people buying computers don't want flexibility, they want reliability
I think this is a correct statement, for the reason that what was once an enthusiast activity (using computers) has become a necessity, and tolerance of problems drops drastically when your life/salary depends on it working flawlessly.
Imagine saying these things about a dishwasher. Who'd care if the circuitry driving the buttons controlling the pumps etc. is proprietary and e-waste when it's dead?
The dishwasher doesn't connect to the internet, doesn't have all my passwords and emails, doesn't access my bank accounts and can't spy on me. This is why I don't care how it works.
But this will affect the future programmers and tinkerers, too. It's as if they are no longer wanted. "But, " you could argue, "hobbyists and enthusiasts can buy specialized hardware". Well, many if not most of the generation of enthusiasts that made the computing world of today possible started with general purpose hardware, not special hardware. They took the common hardware of their day and built stuff with it, doing things that were often not entirely foreseen by the makers of said hardware. And they didn't ask for permission.
If the world of tomorrow consists of most people using gadgets like iphones and ipads, and in order to tinker you need special-purpose hardware, I think we're shooting future kids in their collective feet.
You're probably right. That said, what is an example of something you can't do now that you could do with more open HW, at a personal level? I get that distributing apps at scale isn't doable without going through some store. But at a personal level, what are the types of things that tinkerers can't do with a Mac or ARM-based PC?
They are not appliances. When I buy an appliance I expect to know from the marketing brochures in advance exactly what it will do, and I expect it to do exactly that until it wears out. When I buy a phone or game system I expect to install new functions on it: things not even thought about by the manufacturer.
> consumer demand for malware-free (but not vendor spyware-free) platforms is driving this in a big way
The thing is, you can eat your cake and have it too. Various distributions (e.g. Ubuntu and Fedora) already use secure boot to verify the bootloader - kernel - module chain. There is no reason that this couldn't be extended to e.g. only running signed Flatpaks too. And yet, at the same time, you can boot these distributions without secure boot, load unsigned kernel modules, etc.
It is true that a combination of code signing and sandboxing provides a lot of additional protections. But a less consumer-hostile flavor could be where by default the system is restrictive, but the user always has the power to disable certain or all security mechanisms.
The problem is that it is not only about security and protecting against malware for these vendors. It's also about funneling applications and subscriptions through their app stores, so that they can take their 30% cuts. They don't just want to profit from the initial sale, but also the continued use of a system. The security argument is just used to trap users in their app store ecosystems.
What makes this more unfortunate is that a large contingent of the Linux community is opposed to better security measures by default. This is why many Linux distributions are still stuck in a '90s threat model where gaining UID 0 access is what one should protect against, while applications can freely roam in users' home directories. Consequently, there are only a few Linux distributions that are competitive with iOS or macOS when it comes to desktop security.
> Thefts of smartphones were rising very rapidly but then diminished
Did they really? In the absence of official statistics, I think it just got normalized. When everyone has a monthly contract with insurance, it's not a tragedy if the phone gets stolen: you make a call, they send you a new phone, life goes on.
In the UK at least, phones do get stolen all over the place, it's just not reported anymore because it's simply a fact of life.
"Cell phone robberies and thefts from persons went down 50 percent in the first two years after the kill switch became mandatory on phones. In the past two years they fell again by another 8 percent, to a total of 1,754 last year." (2018)
Not only that. Perhaps most importantly it curbed the growth rate.
Maybe the iPhone model? But definitely not the Xbox one.
In the newest console, every part is designed not to trust the others. IIRC, even the CPU instructions are encrypted.
This is in order to assure game developers that their games will not be pirated -- but I suspect it might be paying quite a large performance price. I don't think Microsoft or Apple need that much distrust of their own packages in regular computing.
The anti-piracy drive is the main one, IMO. The money from services far outweighs the money from the sale of the hardware, in the long run for most types of users on most devices.
It's my understanding that disabling system integrity protection (SIP) on M1 Macs immediately disables their new ability to run iOS apps from the iOS App Store, for precisely this reason.
As for the hardware protections, already we have iPhones doing this with their cameras and screens and fingerprint readers and the like, ostensibly for security but who knows if it's just to kill the third party repair market.
I don't think I'm telling any tales out of school when I say that Apple would prefer nobody ever buy a formerly-broken, repaired, secondhand iPhone or iPad.
"The money from services far outweighs the money from the sale of the hardware, in the long run for most types of users on most devices."
I believe that this is true for the USA market but not true for many markets worldwide. From the perspective of the developing world (containing most of the users and most of the devices, but not most of the revenue), the market is very different, much less relevance of iOS/Apple Store, much higher preference for free apps instead of service subscriptions, etc.
It's not just Apple that's shipping AMD and Intel chips with a special security coprocessor (ME/PSP) embedded in the CPU that only boots signed, unmodifiable code: everyone in that marketshare table is doing so.
What OS does the main CPU run? Usually the one from Microsoft, the vendor with specific state-of-the-art expertise in locking down the platforms on which they operate stores (xbox).
Apple is simply ahead of the curve here; Microsoft (and the entirety of the PC market they run on) will get there soon enough.
I hope you're right and I'm totally wrong, but everything I see with regards to platform stores, DRM, and "cryptographic integrity of software" as it's termed leads me to believe that in a relatively short period of time (10 years, +/-5) basically nothing cheap and performant off the shelf will run open OSes or common software without significant difficulty.
> Already every mac available for purchase, even the Intel ones, requires online activation to wipe and reinstall the storage:
I was under the impression that activation can be circumvented by setting the secure boot mode to something less than "maximum security", at least on Intel Macs. Is that a different mechanism then?
You are confusing signed boot security with the activation of the T2 chip.
The T2/M1 still needs activation data before it will do anything (like let you use the touchbar or touchid or secure enclave or even talk to the SSD) after a disk wipe.
This [online activation] is done after the recovery kernel has already booted from the USB install media. Signed boot and T2/M1 activation are different things entirely.
This is the kind of comment dictators love. Crushed spirits. It's a cold, emotionless, frank extrapolation to a future without hope. You've already given up. Maybe you can drag a few more down with you.
My comment didn't suggest whether this future is good or bad, or whether I am pleased or dismayed by the state of affairs. I stuck to the facts and my own predictions about future facts, and left my emotional opinions out of it.
Perhaps these sorts of closed systems are crushing to you, as you have correctly assumed they are to me. They aren't to most people: the iPhone is wildly popular. Many people will be very happy to get new M1 Macs, Xboxes, and iPhones this Christmas.
Most people prefer the conveniences of central management, having never personally experienced the terrifying and violent failure modes of ubiquitous surveillance and censorship.
I believe most people like what they're told to like. I think a small number of taste makers can change the future drastically and you never know how your words will influence people. If you ever want it to be different, you and I and everyone who cares have to simply say that it is different.
If you want it to be different, you have to ignore them and get people jazzed about a more open future. Eventually one of us will either become famous enough for us to do something about it, or inspire someone else.
> Most people prefer the conveniences of central management
To be fair, it's a pretty broad stretch to compare buying an xbox to "terrifying and violent failure modes of ubiquitous surveillance and censorship".
The console experience is easy to understand for non-technical folk, is vertically integrated (you buy a headset with the correct label and you can voice chat with your friends, you know you have the prerequisite account to play X game on your platform), is fully usable with a controller and designed from the ground up for a tv.
The gaming PC experience is... far worse. If you buy a $1000 gaming PC, you're going to need to install Steam/Epic/GOG/Xbox/Origin/EA to play the top 5 games of the year, deal with awful audio issues, masses of account systems with varying levels of "party" support, hardware/driver incompatibilities, background software issues (antivirus, etc.), varied levels of online play/party support, mixed support for controller/KBM; the list goes on. Networking issues? Hope you don't accidentally have a double NAT, or you might run into [0]. That's before touching on the actual open way of running Proton on Linux (you have an Nvidia card? Tough. Have an AMD card? That's great, just compile your own Mesa).
It's tempting to distill this down to "convenience", but it's much more than that. For many people (including myself, a professional programmer) that convenience is the difference between gaming being a way I want to spend my time or not. I'm not choosing between an open and closed device, I'm choosing between a closed device or no device.
I'm comparing the future of all commonly available computing devices (phones, PCs, consoles, et c) following the xbox model to terrifying and violent failure modes of ubiquitous surveillance and censorship.
When all systems available follow only these cryptographically enforced rules, your tools will make it impossible for you to do basic human life tasks without being censored and surveilled.
I encourage you to study what happens to societies that go through that.
And what is the problem with that? It's a salient example.
Making sure that Apple is represented in an equal and positive light should be a small concern compared to these consumer device privacy issues that affect all vendors.
I am hoping that, given how open some ARM archs are, it will now be within hobbyist reach to create even their own motherboard, send it to one of the fabs, assemble it, and then run full-fledged Windows on it. This looks like a dream come true!
You could definitely do that now and run Linux, but given that some software doesn't exist on Linux, this is very exciting.
> That's a 100% inevitability. Every new CPU architecture incorporates an entire set of new restrictions under the guise of security and privacy.
Under the guise of security: for sure. But where are the restrictions under the guise of privacy? In my observation, the trend rather moves away from privacy.
> Standardized parts and a well defined firmware interface (and thus "standard" bootloaders instead of a multitude like on ARM)
UEFI comes with some ARM SoCs and uptake is increasing. U-boot will still be popular on the embedded SoCs. With standards like SBSA and SBBR, you can run a generic unmodified arm64 operating system.
Same here. The people saying hobbyists can still buy open hardware are missing the point - that most become hobbyists precisely because they can tinker with the commodity hardware.
"Current PCs can last for decades" this might not sound as an advantage for PC computer producers. Surely they would prefer mobile devices model in which many people switch device every 2 years.
I bet there are some higher ups in Dell, HP counting how wonderful it would be to force people to switch to the new model more often. Or maybe even sell hardware subscription.
This would be terrible for the environment, but who cares (never heard Tim Cook or Bill Gates going on the stage and saying, hey, we have a new fancy model (i-whatever or Surface), but your 2 years old one is also great, so keep using it, this would be good for the environment).
Tim Cook on an earnings call, explaining that reducing the price of battery refurbishment to $29 led to fewer sales as people held on to devices for longer:
>According to Cook, while analysts suggested Apple shouldn't do it, the company "strongly believes it was the right thing to do for [its] customers."
>“We also make sure to design durable products, that last as long as possible,” Lisa Jackson, Apple’s vice president of environment, said. “That means long-lasting hardware, coupled with our amazing software. Because they last longer, you can keep using them. And keeping using them is the best thing for the planet.”
Maybe you have good reasons for disliking Apple. Maybe you have fair criticisms of the company, that's fine. However on environmentalism Greenpeace are the real deal. They put in the hard work of checking the details, and they absolutely know the difference between companies doing low effort environmental PR and those putting in the hard work to actually make a difference. Out of all the mainstream tech companies they rate Apple number one. If environmental concerns are a significant factor in your tech purchases, there is one clear best option and it's not even close. Now if there are other factors that matter to you more, that's fine. Your choice.
I think the bigger story here is that computing based on ARM64 has been, and continues to be, surrounding the traditional x86-64 territory. ARM dominates phones, and is popping up in server environments as well, where it is cheaper to run for many workloads than x86-64. We have migrated to ARM-based Graviton processors to deliver our services at my company, and we have been able to reduce our AWS EC2 costs by 40% or so, with almost the same performance. The M1-based Macs certainly show the potential on the desktop. Other vendors will produce better, more competitive desktop-power-level chips, too, that are completely adequate for desktop computing experiences.
We are seeing the classic Innovator's Dilemma pattern whereby the lesser technology slowly overtakes the more brittle incumbent tech. x86 did the same with minicomputers, and then mainframe type workloads.
ARM instances are usually 10% cheaper than their equivalent instance.
How exactly did you achieve such a dramatic reduction in cost? The graviton instances aren't faster than the Intel/AMD options, so you need at least the same number of instances unless something else changed.
ARM in this case is the underdog, attacking the incumbent x86. It is the lesser tech because it started from "below" (lower value niches not taken by the incumbent which prefers higher profit margins).
2. RISC vs CISC has never been settled. Until Apple (and, very recently, Amazon) produced their latest ARM-based architectures, x86 was considered superior. Yes, RISC was theoretically better in the 80s and later 90s, but x86 has been successfully reinvented twice (micro-ops and x86_64) and scaled from 0.2MHz to 5GHz. It's also a mix of CISC and RISC; it's not pure CISC now. Same for ARM: they've added various instructions which bring it closer to CISC.
Another major difference is the memory model. On x86, other CPUs must always see the writes of a core in exactly the right order. This limits the ability to reorder store ops significantly. ARM requires a memory barrier for this. This is a major reason why x86 emulation is so slow: one must basically issue a memory barrier after every store op.
The M1 actually also implements the x86 memory model in HW. It's only usable for Rosetta applications and comes with perhaps a 20% perf penalty. But it's still way better than emulating it with barriers.
In C++ terms it pretty much means x86 is always seq_cst. With ARM one can actually get the benefit of the different memory-order options. As an example, one can do an atomic access without having to flush the whole store buffer out, which is impossible on x86.
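As a rough illustration of that difference (a minimal C11 sketch; C11 atomics map onto the same memory model as the C++ ones discussed here, and the function names are made up for the example): both increments below are atomic, but only the seq_cst one demands full ordering, which is where the weaker ARM model can avoid a full barrier while x86 pays for it either way.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Both functions perform an atomic increment; only the ordering differs. */
    static atomic_int counter = 0;

    static void *bump_seq_cst(void *arg) {
        (void)arg;
        /* Sequentially consistent: fully ordered with respect to other atomics. */
        atomic_fetch_add_explicit(&counter, 1, memory_order_seq_cst);
        return NULL;
    }

    static void *bump_relaxed(void *arg) {
        (void)arg;
        /* Relaxed: atomicity only, no ordering guarantees beyond the access itself. */
        atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, bump_seq_cst, NULL);
        pthread_create(&b, NULL, bump_relaxed, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %d\n", atomic_load_explicit(&counter, memory_order_relaxed));
        return 0;
    }

Compile with something like cc -std=c11 -pthread and compare the generated assembly on the two architectures to see where the memory model shows up.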
Due to the instruction encoding and the memory model for multicore, I don't really see x86 dominating anymore in the upcoming decades.
And as modern OoO cores are so similar internally, it's not even a big deal in the end. AMD shouldn't have any issues producing a Zen ARM core: switch the instruction decoder and that's pretty much it (a ton of design work, for sure). Keep the x86 memory model optional for emulation, and binary translation can almost be thought of as just turning x86 instructions into fixed-width ones ahead of time.
I am trying to wrap my head around whether ARM's looser memory model is a fundamental performance advantage or not.
I had always assumed that the looser memory model must have a performance benefit. But this comment from last week argues that it doesn't really buy that much, and that a bigger buffer can eliminate most of the difference: https://news.ycombinator.com/item?id=25263461
If TSO forces flushing of store buffers for every atomic access, that seems like a substantial disadvantage for x86.
It has to flush them, because if another core sees the result of the atomic op, it must also see everything else that the first core wrote before the op. While it can indeed first see no writes and then suddenly all of them, it can never see just the atomic op and not the previous writes.
Without that requirement, the store buffers can be kept unflushed, for example to see whether a full cache line can be gathered, and flushed only then.
The comment is correct that an x86 with a heavy reordering backend will beat an ARM without one. However, an ARM with one does handily beat an x86 with one. Case in point: the M1.
In large part, yes. But not *the* reason. It's fast because of many things like that. TSO doesn't affect single-core perf much, so it's not really a factor there, and yet it's blazingly fast. However, the multicore perf is really great too.
I haven't verified the exact numbers myself, and it will depend on the exact thing you're running. It's just on the order of low tens of percent.
TSO cannot be enabled outside of Rosetta, as it's not exactly a good ARM extension. Perhaps you could do some trickery, but Apple likely prevents that.
However, you can test it by making something where you know Rosetta generates comparable ARM assembly from the x86 version and just running a comparison that way. Some sort of parallel lock-free algorithm would be the best candidate.
TSO is possible to enable outside of Rosetta with some shenanigans in the kernel. Unfortunately getting Rosetta to generate code that is comparable with what a compiler would create is quite difficult: it needs to lift x86 into its own IR and then re-do register allocation, which it is quite good at but obviously not perfect.
I haven't done exact measurements, but I don't think the cost of enabling TSO is anywhere near as high as 20%. On the contrary, I don't think I have noticed a real difference; perhaps it is but a couple percent slower.
Just want to add one thing: x86 has stronger memory “semantics”. It doesn’t actually have to work that way behind the scenes; it just has to appear, by the end of the block, as if it worked that way. So x86 does plenty of reordering, store combining, etc. IMHO the performance difference between ARM and x86 is barely related to the ISA, and in the M1’s case it’s definitely not; a lot more is going on than just taking advantage of the weaker memory model.
Having to appear to have worked that way does impose restrictions in the multiprocessor case. ARM chips naturally do all of that reordering too, with the memory model simply giving them far more freedom about it.
One couldn’t do an x86 version of the M1, mostly because there is no practical way of making an instruction decoder that wide for x86.
And the performance penalty the M1 pays when running in TSO mode strongly implies that yes, the weaker memory model does play a major role. Not the biggest, but definitely not insignificant. Tens of percent here and tens of percent there combine into a ridiculous perf boost.
I think "RISC" was, for a stretch of time, "superior", but it was always in very expensive and very niche workstations and servers.
x86 was itself a kind of underdog in that space, losing out in sheer performance and bandwidth to the high-end RISC market, but making up for it by, you know, being cheap enough that "regular people" could go and buy a family computer with an x86 processor.
Intel managed to slowly improve the various deficiencies over time, and produced a processor not only affordable, but also superior to the incumbent RISC chips in all but niche areas.
I think more than the ISA, Apple's M1 manages to be so good at its job for similar reasons to the old RISC chips -- it's built from the ground-up with a pretty specific target application, rather unlike x86, which has to be all things to all people, with legacy support and scaling from laptops to quad socket 3U servers.
The reinvention that gave x86 the advantage over (workstation) RISC was OoO execution in the Pentium.
It’s a bit silly to think of x86 and ARM as being so different these days. Most x86 code looks a lot like it was produced for a RISC chip and ARM has been gaining some more complex instructions and addressing modes. Academics may have felt that RISC was better than CISC for a long time but I don’t think they were predicting the world of today so much as they were incorrectly predicting the near past. If RISC were so much better then we’d have lots of workstations running on modern Alpha or PA-RISC systems. But we don’t see that.
Ah, yeah, forgot to mention Out of Order Execution.
> Academics may have felt that RISC was better than CISC for a long time but I don’t think they were predicting the world of today so much as they were incorrectly predicting the near past.
Plus you know, academics have been known to be wrong. That's why there's even a saying for it: science advances, one funeral at a time. People are emotional and get attached to their pet theories. There are many examples of this.
It's going to be interesting to see how CPU/GPU tech advances the next few years.
It was the PPro, and OoO processors were being developed by lots of companies and teams at that time. Wikipedia indicates it was featured in the PowerPC 601 ('93), SPARC64 ('95), PPro ('95), MIPS R10000 ('96), etc.
Hard to see where the advantage over the workstations was on that point...
Yeah, the CISC vs RISC thing barely makes sense anymore. It could only matter in the context of hand-programmed and low-frequency processors. What remains of "it" (at least in some people's minds) that is still relevant today in major ISAs is clearly instruction encoding, but you could totally make a CISC with fixed-length instructions. You could also make a RISC with (highly) variable length, but... why? And actually, why make a highly variable-length ISA at all, regardless of whether it's RISC or CISC? The real reason x86 has highly variable length is purely historical. When you only decoded one instruction at a time it didn't matter much. Maybe you wanted some big ones for convenience, but it would have been a shame to make them all big. So the 8086 had instructions of 1 to 6 bytes (up to 10 with prefixes?). Then you had 32-bit with 16-bit compat, which went up to 15 bytes, and it stayed at 15 max for AMD64...
Today the most-used x86 instructions are not that different from ARM ones, and in a good number of cases they're actually even simpler. The simplest way to compare is to look at the assembly an optimizing native compiler emits and look up what the emitted instructions are doing.
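A trivial (and admittedly cherry-picked) example of what that comparison looks like; the assembly in the comments is roughly what current GCC/Clang emit at -O2, so treat it as indicative rather than exact:

    #include <cstdio>

    // Neither side's hot path looks especially "CISCy" or "RISCy" here; if
    // anything, the ARM64 instruction does more work per instruction.
    //
    //   x86-64 (-O2), roughly:          arm64 (-O2), roughly:
    //       imul  edi, esi                  madd  w0, w0, w1, w2
    //       lea   eax, [rdi + rdx]          ret
    //       ret
    int madd_example(int a, int b, int c) {
        return a * b + c;
    }

    int main() {
        std::printf("%d\n", madd_example(3, 4, 5)); // prints 17
        return 0;
    }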
The micro-ops of a processor are actually quite dependent on the ISA. You could define ISA-neutral micro-ops, but I doubt that would be very efficient. So you can't really compare the complexity of ISAs by the number of micro-ops issued, because across different architectures and microarchitectures the micro-ops are themselves more complex in some respects and less in others, with plenty of similarities to (the core instruction set of) the ISA they implement.
Complex instructions (for backward compat, special cases, or intrinsically difficult operations) are transformed into a potentially very high number of micro-ops, often looked up in a ROM.
What aspect added to x86 is RISC-like? It sounds a bit like an oxymoron. If you start with a complex instruction set, you cannot make it less complex by adding new instructions.
Most chips are "microcoded", at least here and there. The term does not define a whole micro-architecture, esp in modern complex ones, but I'm not really sure how you would implement old school CISC chips without microcode concepts. The 8086 was microcoded: https://www.reenigne.org/blog/8086-microcode-disassembled/. It basically means that "you" program some low level internal details of a chip, e.g. the muxes connecting various buses to execution units. Most of the time the "you" is only the chip designers, and it is stored mostly in a ROM. Sometimes microcoding is accessible by actual software programmers, but it is quite rare.
And modern u-ops are not necessarily like what was done at the time of old-school microcode.
Those are two completely different things. Every x86 instruction is decomposed into one or several micro-ops that are executed. Hardly any x86 instructions are microcoded; being microcoded means the decoder has to start streaming micro-ops out of the microcode ROM instead of generating them directly.
This is a dynamic evaluation. Right now, ARM is a far superior technology to x86 for my smartphone use cases, and x86 is a massively superior technology for my high-detail PC gaming use cases.
(It's split for things like development, now that the M1 is here. Personally I get my development done with 8-core 4+ Ghz x86 chips in Windows/WSL2/Docker, but others are getting development done with Apple Silicon M1 ARM chips!)
Over time the "lesser" technology can become the greater technology for more use cases.
> Is that latter case simply due to the availability of high performance GPU cards?
It's a combination of things:
* Apple has not indicated that they'll allow for third-party GPUs
* Apple has not pushed for widespread gaming compatibility
* Game developers prefer to reach the widest audience
Apple would need to make M1 successors that are superior in gaming computation and capable of pairing with high-performance GPUs, write the necessary drivers, AND get broad enough adoption of gaming on Apple platforms to factor into game publishing.
This is all theoretically possible, but it didn't happen while Apple was using roughly the same high-end hardware available to PC gamers, so it's questionable (in my mind) that it'll happen when they are locking down their hardware further, and trying to use all their own hardware for gaming.
The first ARM chips were blazingly fast and made their competitors' performance look anaemic - they have been around since the mid-1980s. The real difference is that ARM went the low-power route while retaining as much performance as possible, rather than depending on desktop power supplies and industrial cooling.
One of the reasons this new chip is so performant is the "headroom" that this approach has given them.
Apple are the survivors of the vertically integrated home computer makers, and they only managed to survive by the reverse acquisition of NeXT and by diversifying outside of the desktop market.
One of the three major consoles is ARM-based and one of the best-selling systems of all time. From last gen, the PS Vita was ARM-based and considered high end for mobile graphics at the time. Apple has already crushed Intel in on-chip graphics, and Nvidia is heavily invested in ARM.
I don’t think ARM and graphics will long be known for poor performance.
As a fan of AMD, I hope they see the writing on the wall and are planning accordingly. It would not be surprising to see the successors to the PS5 and Xbox Series running on ARM.
The complaint about ARM as I understood it was that it couldn't match Intel performance. The commentary around Apple seems to be that over the decade-plus where they iterated for iPhone and iPad, it has caught up.
According to Wikipedia (https://en.wikipedia.org/wiki/X86-64#History), AMD's x86-64 was announced in 1999 and the full specification was released in 2000. Assuming patents last for 20 years, they should have all expired by now. For SIMD, this includes everything up to SSE2; most software which uses any of the newer SIMD extensions should have a fallback to SSE2, since older CPUs don't have these newer extensions.
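As a sketch of the usual "fallback to SSE2" pattern (my own illustration, assuming GCC/Clang on x86-64; `__builtin_cpu_supports` and the `target` attribute are their mechanism for runtime dispatch), the kernel names below are hypothetical:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Baseline kernel: plain C++, which an x86-64 compiler will vectorize with
    // at most SSE2, since SSE2 is part of the original x86-64 baseline.
    static void scale_baseline(float* data, std::size_t n, float k) {
        for (std::size_t i = 0; i < n; ++i) data[i] *= k;
    }

    #if defined(__GNUC__) && defined(__x86_64__)
    // Same kernel, but the compiler may use AVX2 for this function only; the
    // binary still runs on SSE2-era CPUs as long as this is never called there.
    __attribute__((target("avx2")))
    static void scale_avx2(float* data, std::size_t n, float k) {
        for (std::size_t i = 0; i < n; ++i) data[i] *= k;
    }
    #endif

    void scale(float* data, std::size_t n, float k) {
    #if defined(__GNUC__) && defined(__x86_64__)
        if (__builtin_cpu_supports("avx2")) { // runtime CPUID check
            scale_avx2(data, n, k);
            return;
        }
    #endif
        scale_baseline(data, n, k);           // works on any x86-64 CPU
    }

    int main() {
        std::vector<float> v{1.0f, 2.0f, 3.0f};
        scale(v.data(), v.size(), 2.0f);
        std::printf("%g %g %g\n", v[0], v[1], v[2]); // 2 4 6
        return 0;
    }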
Ah! Thank you. The timeline makes sense. Seems Apple has timed this perfectly for their transition. I am now giving them some credible excuses for the neglect of the Mac over the past few years.
> Seems Apple has timed this perfectly for their transition
Important to note that this doesn't really apply to Apple Silicon. Rosetta 2 isn't x86 emulation, it's a translation layer - big technical difference.
Especially since that kind of legal question depends highly on how much companies are willing to fight, and on the state of mind of the judges. And in the not-so-distant past, Intel warned MS and PC vendors doing some ARM stuff that it might be willing to fight. As for the state of mind of the judges, it is difficult to predict...
But it can be different. Depends on the magic formulas you use for your claims, and whether you are actually permitted to patent software in the first place (with or without other magic formulas usually stating that said software is executed by some hardware).
And in places where it is not supposed to be possible to patent software, patent offices are usually nevertheless accepting them, because it gives them money so why not? :P
TL;DR: it is a mess. It depends on an insane number of contextual factors, and it is as unpredictable as the outcome of Oracle vs. Google before the verdict. Or maybe even more unpredictable.
No, it's patent law, not copyright law. Interestingly, while the US has Fair Use (which is an open-ended legal defence), I don't think any countries in the EU (or "Western Europe" if Brexit happens) have anything beyond the closed and enumerated "Fair Dealing" defence.
That was true for a long time, but to quote the Government of the Netherlands (the closest thing I can find to an official source) [0]:
>The United Kingdom (UK) left the European Union (EU) on 31 January 2020. A transition period is now in place until 31 December 2020.
As far as I can tell from my distant vantage point on the west coast of North America, there is very little doubt left. Brexit has already happened and most of the fallout will be settled by the end of this month. I supposed you could say that it's still in a constant state of happening, but it's well past the point of "not happening" as far as I can tell.
We'll see what happens. I mean, it's possible that they actually end up going through with a complete exit on Dec 31. My guess, however, is that there will be ongoing negotiations for a long time to come. The UK is likely to be out, but only in the sense that they no longer get to vote on EU resolutions. They'll likely negotiate agreements with the EU which end up putting them right back where they were (regulation-wise).
That's my expectation at least. They may feel the full pain of a full exit on Dec 31, but I doubt they'll want to continue in that state for very long.
There's also the possibility of yet another extension on the transition period.
The UK ceased being in the EU in January, and transitioned into a temporary EEA membership, subject to most EU law. That comes to an end in a few days; at that point what happens next depends on the outcome of negotiations. There's certainly a world in which EU patent rules continue to apply to the UK; in fact, that would probably have been the default assumption until the British government went off the deep end.
So, if you use 'Brexit' to mean purely the legalistic act of leaving the EU, then yes, it's happened. However, effectively none of the _effects_ of Brexit have happened yet.
This is because very little uses AVX instructions. Hasn't it always been better to optimise software for the GPU rather than for AVX, even though they are not perfectly interchangeable?
You might be thinking of the most recent AVX-512 extensions which had significant power management complications delaying adoption. Many common programs have optimized SIMD paths because GPU computing is significantly more complicated and the setup overhead makes it slower for uses which don’t process tons of data - one of the interesting points about the M1 is that the unified memory design avoids the overhead of copying data between the CPU and GPU.
No problem - Intel’s marketing hasn’t helped at all. I notice this mostly for cryptography - we do a lot of data validation, so speeding up SHA is a nice win.
Games, pretty much all of the AAA ones. ~10 years ago you would read cries of AMD x64 users not being able to play SSE3 games. 2-5 years ago the situation was repeating with AMD Phenom I/II owners and SSSE3/SSE4.
Now we need an ARM processor that can compete with the M1. It seems Apple caught everyone in the PC industry with their pants down. Microsoft's SQ2 processor that ships in their $1400 Surface Pro X is almost 50% slower than the M1.
It would be hard to argue that the PC world won't at least make some attempt at shifting to ARM. Apple has demonstrated far too large of a performance and power efficiency savings to not try. Of course, even if that becomes successful, x86 will be available for decades.
Right now, ARM Macs are going to just be lost for any kind of gaming (except Catalyst versions of mobile games). This is probably ridiculous, but my dream is that the M-series chips become so compelling over the other options that the NEXT nextgen consoles would adopt it. Then we could get back to having at least a few of the biggies getting ported to Macs. But I can't even get my hands on the current new-gen PS, so this is a looong way off, even in the best case.
> ARM Macs are going to just be lost for any kind of gaming
Every game I've tried with a Mac port runs great on the M1 under Rosetta. I've sunk many hours into Factorio and Offworld Trading Company on my M1 mini, and if I gave you the same game running side by side on an Intel Mac, I'd bet you couldn't tell which was which without quitting and checking the system profiler.
If you meant triple-A gaming, then yeah, without a sizable increase in mac marketshare, it's probably lost for the foreseeable future.
Macs in general are terrible for triple-A gaming. I doubt any Mac in existence can run a Boot Camp version of Cyberpunk 2077, except maybe a Mac Pro with a top-end GPU, or maybe an eGPU would work, assuming you could get a top-end AMD or Nvidia eGPU.
On top of that, there's the lack of raw mouse input, which Apple hasn't fixed despite over a decade of people begging. This really fubars FPS games where absolute control over sensitivity and acceleration curves is crucial for mechanics (e.g. countering recoil and flick shots in Counter-Strike).
It's no Cyberpunk 2077, but I recently played through Control on medium-ish detail on the AMD GPU on my 4 year old Macbook Pro (in Bootcamp, obviously). It managed around ~50 FPS most of the time, only very occasionally dropping a little below 30 when there was a lot going on. I would try Cyberpunk 2077 on it out of curiosity but I'm not interested enough in the game to pay the launch price for it.
Obviously no hardcore gamer is going to be happy with this, but in general I'm very happy with the trade-offs Apple has chosen for portable macs and when I do replace this with an Apple Silicon machine, I will be slightly disappointed at losing the ability to play the occasional triple-A game, but not enough so to seriously consider other options.
I keep seeing comments like this pop up, and I don't get it. Of course if you run the game on some other computer and only stream the video output to the client you can play any game on nearly any device. That's not what anyone's talking about when they say you probably can't run Cyberpunk 2077 on an M1 Mac.
I've considered it, but I've tried Steam Link inside my local LAN, and Playstation NOW over my gigabit cable connection, and I find both to be too laggy for my taste.
Unreal Engine and Unity both have mobile ports, so already support ARM in some manner, and they provide much of the heavy lifting for the low-end stuff for many AAA games.
That said, I'm not sure if Unity will have an advantage here because of its reliance on C# compared to UE's C++ (but I'm not sure how common it is to use C++ compared to UnrealScript). Then again, if you can trigger the X86 consistency mode on the M1 (or if it's always on?) for ARM instructions (if that makes sense), then maybe it's as simple as a compile/release for that target from each of those platforms since the memory model will be the same for the C++ code.
Note: I'm not a game developer, and have only looked at/used minimally at those engines/suites, so there's a lot of assumptions in the above based on how I understand stuff to work, not how I know it to work from experience.
Also you need the rest of the integrated system-on-a-chip, which no one else has designed and poses fundamental challenges to the PC hardware industry. Also you need to secure 5nm chip fabrication capabilities, which is awkward since Apple just bought all of the world's capacity.
Then you get to start working on making the software work.
> ...Apple caught everyone with the pants down in the PC industry...
Pffft. There's no competition between PCs and Apple, they're not even in the same class. PCs are general purpose computers and Apple products are closed platform devices that you don't really own or have any control over.
That's why Macs only have 10% the market size of PCs.
Meh. Phones I can take or leave. Personally, I will never, ever use an appliance as my workstation though and I'm a user, so my opinion matters much more than any manufacturers because I'm faaaaar from alone ;)
I'm wondering what's happening at Microsoft's camp regarding M1.
They are in a weird situation. They depend on AMD/Intel to deliver. This is a massive dependency for Microsoft. Are there other large corporations that have this big of a dependency?
I understand that Microsoft's revenue is shifting towards cloud, services, etc. But I think Windows is still "the" key to a lot of those offerings. Take a massive chunk of market share from Windows and I'm pretty much sure all those services will have a hit too.
Businesses above a certain size overwhelmingly use PCs unless they are in specific arty sectors. The M1 won't change that, or at least not rapidly.
I think Microsoft has 5 or 6 different lines of business that each bring in billions or more of revenue: Windows, cloud, gaming, Office/productivity and Surface. They are the most diversified of the big players.
Totally agreed. I'd also want to add something for this:
> Businesses above a certain size overwhelmingly use PCs unless they are in specific arty sectors. The M1 won't change that, or at least not rapidly.
Besides the fact, which many HN users working for FAANGs or startups don't encounter, that big enterprises are both rigid and quite locked into the Microsoft/Windows/Office monoculture, big companies tend to have very elaborate purchasing processes.
Things with just 1 vendor tend to be completely ignored, if they're meant to be used throughout the company. There is a certain irony with my previous paragraph, but Microsoft stuff is usually grandfathered in, the rules for new acquisitions don't apply to them. On top of that Microsoft has super solid backwards/forwards compatibility, migration paths and an amazing history of working with enterprises, as much as they are hated especially by the Open Source community for other things.
So you can expect ARM Macs to be adopted and used, but most likely frowned upon by IT departments and CIOs if too many people want them. With Intel they can have a bidding process or at least a product comparison between Dell, HP, Lenovo, etc. With Apple it's... just Apple. Also Apple's B2B side is super underpowered and their history is very chequered due to their frequent breaking changes.
I have seen that business orgs within an enterprise with their own P/L usually end up doing an end-run around IT and just buy MacBooks from suppliers or directly from Apple.
Funnily enough, both times I saw this happen (at a large bank and later at a utility company) the only thing we needed from IT was network access and a Windows VM license so that we could run the god awful company payroll app that required IE11 to work. Once a week dozens of very highly paid devs and SREs would boot up a Windows SOE VM, login to the domain, start the payroll app, enter in five rows of data and shutdown. What a waste of time and money.
We kind of have this setup for those who want Linux as their desktop OS, it's provided by our IT and it's great.
If you want a Linux workstation, IT will provision you a Windows VM. It has email, Office, and our payroll app. With most stuff on Teams you can ignore it most of the time and get on with your work. IT's happy, we're happy.
Second that. Can I have an Apple technician at my door 4 hours after I call to report that something is broken? Will that technician arrive with a new computer, just in case the old one cannot be repaired? Dell etc. have that, and this is something IT in every company wants.
Yeah, Dell definitely has enterprise support all worked out, and can use the economies of scale on that to provide what are usually rock-solid systems which they partially assemble themselves. If you're shipping a few 100k units of a desktop spec, it pays to make sure you can control costs by removing problematic components, and in their case they often write their own firmware so they can quickly work around bugs.
Dell may actually be the PC provider closest to what Apple does, since I believe they source components for their own hardware in some cases (or use white label in others, but with their own firmware as I noted).
That said, I think the only thing that prevents Apple from competing in this space is desire and experience providing that service to enterprises at scale. It's not like they don't have the money or manufacturing pipeline, and lack of experience solves itself after a while.
>Things with just 1 vendor tend to be completely ignored, if they're meant to be used throughout the company.
I disagree. I work in such a company and they are always looking for one and only one vendor for their millions of PCs. It used to be Dell, now HP but they completely outsource procurement, maintenance, repairs and disposal of the computers to that one full-service provider.
Apple wouldn't be considered because they don't offer this full-spectrum service.
I think what he means by "needs more than 1 vendor" is that there exists a competitor to Dell (or now HP) providing roughly the same service that they can switch to if they are not happy. With Apple the only supplier is Apple. There is no other supplier you can switch to if the relationship with Apple gets to the point that you cannot work with them anymore.
This leaves the operating system out of the equation. Basically there is no effective replacement for Windows or macOS (or Linux). Whichever platform you pick, you are stuck with it due to legacy software and having to retrain hundreds (if not thousands) of workers if you want to switch.
I’ve contracted in lots of large enterprise, and software engineers who prefer (and are provisioned) Macs are everywhere. I’ve even seen a number of .NET teams who bootcamp with them. I’m contracting in a reasonably large bank right now, and most of the people in my office use MacBooks.
It’s true that large enterprise is slow and cumbersome. But they really don’t care too much if they’re buying you a MacBook, or a surface, or a Dell. Their main concern is whether they can properly manage all their devices, which isn’t too hard to do these days. Whether a large enterprise has MacBooks or not depends a lot more on whether or not they’ve figured out how to use JAMF than it does anything else.
There is no such large enterprise. You need software engineers at the very least to operate an ERP, and any organisation that doesn’t rely heavily on an ERP to operate is not a large enterprise. Any large enterprise will also have an infrastructure team, comprised of people just as likely to have strong preferences about their operating systems.
Technical and design staff may make up a relatively small portion of the headcount for some of them, but the Windows-only enterprise is becoming increasingly less common. The primary reason being that the tools they need to support other operating systems are simply more capable and accessible than they have been in the past.
I don't know in what kind of circles you mingle, but the software engineers I know that are working on ERPs are VEEEERY different to the ones that are working in FAANGS and startups. Think (Notepad++ and JDeveloper) vs (vim/emacs and cargo).
And as far as enterprise IT, there are a ton of Windows infrastructure folks, and again, there's little overlap between those and FAANG types. Plus infrastructure folks frequently hate Macs, for example (the stuff to manage fleets of Macs is vastly underpowered compared to stuff used to manage fleets of Windows PCs).
Plus even in those companies, the ERP & infra folks are maybe 1-5-10% of the workforce and they're generally in supporting roles (frequently they're temps or consultants), not in core business roles. Nobody listens to them :-))
And they generally want to get rid of them, they're seen as an expense.
I do agree that Windows-only places are becoming less common, but in big enterprises it's far from agreed that going outside the Microsoft ecosystem has a good ROI.
I think you’re being a bit judgemental and overly stereotyping people here.
In any case, the reason enterprise adopts these technologies isn’t directly because of a technical ROI. It’s because giving people the tools that they want to use is generally seen as beneficial. Providing good UX to internal users is becoming just as trendy in enterprise as DevSecOps and service-oriented architecture. There are plenty of fairly obvious business reasons why enterprise would care about these things. One of the more important ones is that these organisations are competing with FAANGs and startups to hire talent. If you walked around the bank office I was working in today, I doubt you’d be able to tell it apart from a typical well-funded startup.
This to me is the interesting spectator sport for M1 (I have no intention of buying Apple hardware) - what are the competition going to do in response?
Based on all the reviews and benchmarks, the new generation of Apple laptops is going to outperform anything from AMD/Intel while having twice the battery life. These are not 'small differences' only of interest to geeks; these are large, user-visible differences that could make everything non-Apple look inferior.
I don't see x64 suddenly getting competitive on this front any time soon, so are the 'traditional' laptop vendors (e.g. Lenovo) going to have to pivot to ARM as well? Possibly segmenting so that 'gaming' or 'workstation' are different market sectors that stay on x86. Even if they do switch there's going to be lag for development, and there's no guarantee that the software landscape will be as set up for it as on Apple.
As someone whose purchases are 'second hand thinkpad, install Linux' I'm all for it. If you pick the right distro, Linux has supported ARM well for a while.
First, I think the "rapidly" is key here. Momentum / current market share for enterprise and gaming is strongly in the x86 / Windows space.
Second, the M1's successors have to cover a lot more use cases first. There are certainly use cases where the M1 is winning on single-core performance, some on parallelism, and nearly absolutely on efficiency and battery life. There are still plenty of cases where software either only works on Windows, or it's where many users currently are and are accustomed to being. There are other cases where "M1 for x" doesn't exist yet - high-end workstations, for example. We have rumors and expectations for this to come out over the next two years, and assuming the competition stands still, you might start to see cases where the time savings on heavy computation favor the Apple Silicon options heavily enough to overcome inertia.
But I would suggest people look at Intel and AMD in the Windows space for a precedent. AMD multi-core performance has been dominating Intel for about 3 years now... guess who owns the market share? It's still Intel. The momentum definitely takes years to overcome. AMD is at something like 20% of Windows notebook share after about one year of having dominant mobile chips. See this - https://www.pcworld.com/article/3588154/amds-notebook-pc-sha...
> Momentum / current market share for enterprise and gaming is strongly in the x86 / Windows space.
Sure, but they're not just going to sacrifice the consumer market entirely. The industry as a whole will need to respond.
> There are other cases where "M1 for x" doesn't exist yet - high-end workstations, for example.
For now. In a year, I wouldn't bet on it.
> AMD is at something like 20% of Windows notebook share after about one year of having dominant mobile chips.
AMD beat Intel by a little in efficiency over the last year (and for the mainstream laptop market, after a certain point, that's mostly what people care about). M1 is a _dramatic_ jump forward, there. "Here's an AMD and an Intel laptop for 1k each, the AMD one is a bit faster and has slightly better battery life" is a different proposition to "Here's the same AMD laptop from earlier, plus a $1k MacBook Air. The MacBook Air's a bit faster, and has twice the battery life".
The Ryzen 4000 laptop chips outperform Intel's by as much as 45% in benchmarks, while getting 33% better battery life.
The M1 is roughly competitive with the Ryzen 4000 chips (depending on variances in TDP, cooling, etc.) but arguably gets as much as double the battery life.
No argument that it's an advancement of CPU technology, but I think you might have some magical thinking if you think momentum doesn't matter.
It's not like suddenly 100% of buyers look at only benchmarks where the M1 leads, or only at battery life, or only at software that runs on the M1 rather than all the software they use.
If users were only looking at CPU benchmarks, AMD would have a lot higher market share, and the same will be said about the M1 after a year as well. Will Apple users replace Intel laptops in record numbers? Probably! But the conversion of Windows users will be slow regardless of hardware.
On the ARM front, MS launched the Surface Pro X last year that runs on an ARM chip they developed with Qualcomm. It is way behind M1 in terms of performance and battery etc but its the same kind of idea - fast ARM based ultraportable with emulation for non-arm apps. So they do at least have a 'dog in the race', even if its a slightly lame dog at this point. I imagine they will be throwing increasingly more resources at ARM in the future.
Folks here don't care whether their business software runs at all, or runs any faster, as long as it is cheap on a per-unit basis. There is no such thing as ROI in the controllers' Excel tables at my company. ;)
Microsoft does not exactly need Intel or AMD. For Windows, which is massively popular, they just need hardware that runs it well. Right now, the best hardware for Windows includes x86 CPUs from AMD/Intel and GPUs from Nvidia/AMD.
If Apple releases Silicon that begins to dominate in other areas by enough of a lead that it can't be ignored by purchasers, then Microsoft could start to sweat, assuming no one finds the hole in the market tempting enough.
So I would think the real question is how long will it take for competitors to step up their CPU efficiency.
Intel has ground to a halt, essentially, on the efficiency front. They continue to struggle with process improvements, and have had to do some tricky engineering to work within the constraints of an older process, while also having to settle for increasing power demands.
AMD has been fairly relentless in their forward march over the past 3-4 years, improving efficiency on their x86 chips. The next couple of years are going to be very telling how well that march of progress competes with Apple's march with their ARM-based CPUs.
There's still so much popularity in the Windows/x86 space that AMD likely doesn't feel overwhelmingly threatened, but one would imagine they do not have their collective heads in the sand, because they are still a market underdog. They cannot risk resting on their laurels while Intel catches up, and they are likely very well aware of Apple's thrust into the competitive CPU landscape. Whether they must switch to ARM to remain competitive is not yet clear, but will be in time.
> They depend on AMD/Intel to deliver. This is a massive dependency for Microsoft.
The moat that Win32 provides has been eroding for a long time. That's why Microsoft has shifted their focus to the cloud. This technology might slow things a little but the writing is on the wall.
Most enterprises want to deliver software through a browser, the last thing they want to do is install software on the client. My customers have been telling me this for two decades now. They don't even like having Office run locally, they will drop it as soon as the browser version is viable.
Microsoft will be fine, they appear to be a long way along that journey already. I'm not so sure about Intel though. It's in Microsoft's interest for processors to become a commodity product. They would be over the moon if the competition sucked all the profits out of AMD/Intel.
The people who should be most worried, though, are the HP/Lenovo/Dells of this world. They have been happy to be carried along on the coattails of Wintel for decades now. They will struggle to adapt to this new world.
I think MS is in trouble. Nearly everyone's home computer just runs Chrome now. Work PCs are going the same direction. Even developers can probably rely on SSH and RDP to a server running anything. Android tablets are selling for about the same price as a Windows license.
PCs/laptops and Microsoft are thriving. You can't even buy a modern GPU from AMD or Nvidia, Ryzen CPUs are sold out everywhere - and there is a massive new console launch with x86 in the new Xbox and PS5.
Windows has supported ARM for a while, and they have a nice ARM device right now.
New mobile Ryzen chips are extremely fast and efficient. X86 isn't going anywhere.
If Apple is successful putting algorithms into silicon for micro efficiencies Intel/AMD/Microsoft aren't going to sit idly by.
Good point on the consoles. I'm just thinking about my office and seeing thousands of new PCs bought every year. We're now getting more terminals and virtualized PCs running on servers. There isn't much reason to stay x86.
MS have declared Windows 10 the "last version of Windows" [1], as it's essentially transitioning into a free, serviced OS like macOS and ChromeOS which, after the initial OEM license cost, will receive free updates indefinitely.
MS have been monetizing it as an additional ad platform for their other products & services like OneDrive/Office 365/Edge, etc., whose popups I frequently see.
In that light it makes sense for them to support ARM/M1 and, as a goal, have as many installations of Windows as possible, as Google does with their OSes.
I'm expecting them to follow suit with bespoke silicon. They are already doing this in a limited sense with the Surface Pro X. I bet Apple has now really lit a fire under them.
Looking at their past actions, Apple will do everything, legally and technically, to prevent Microsoft or anyone else from "commoditizing" not only their hardware but also software. Yes you can build a hackintosh yourself, but if you try to sell it, Apple will intervene. Also, technically there is no problem with running macOS on VMware, but you need a patch like Unlocker specifically for that purpose. Their money comes from a high level of control, they won't give it away without a fight.
I'm pretty sure we'll see M1 copies/analogues soon enough (same design ideas, even if the processing cores are worse). That will spread around the world very fast. Kind of like AirPods.
Cheap is relative. Right now their M1 devices are objectively the cheapest practical ARM64 workstation products on the market. I don't see why they shouldn't be able to maintain that lead for a while as costs come down for ARM64 workstations in general. And I think it is reasonable to expect that ARM64 platforms will eventually be cheaper than x86 ones.
When you are making an effort to buy a $400 laptop, then any flagship product from any vendor is out of the question. $400 products are not what define the future landscape of the market.
I am not trying to say that the entire impoverished world is going to go out and start buying Macbooks because of the M1. What I am saying is that practical ARM64 workstations are now a lot more plausible at every price point because of what the M1 demonstrated.
One of the parents talked about the rest of the world. Hence my reply. You don't have to be in the 3rd world to consider an Apple product extremely expensive.
I agree that practical ARM64 Workstations will be more feasible now, but there's still a large gap.
There's a whole market of refurbished first-world components. I talk a lot with people from LATAM and they're used to buying 2nd-hand Xeons with Chinese motherboards for their prosumer market.
They even sell bundles of Xeon + motherboard + memory (it was DDR3 recently, IDK how much that has changed).
In many countries people hacked early-2000s games, and there are communities maintaining such games, because many people don't have the means to play the latest titles, nor money to pay for subscriptions and such. They pirated the whole ecosystem.
This is pretty common in Russia and Turkey too. I guess it is in other countries as well.
I do have websites in Spanish and try to make them lightweight and avoid JS because I know some of my visitors will have a hard time with it.
We are talking about the most valuable company in the world, and the M1 release was covered enthusiastically the world over. The notion that it's US-only is absolutely preposterous.
Apple is a significant player in mobile in the entirety of the West: Canada, the UK, Germany, France, Italy, the Netherlands. And its PC saturation is virtually the same in Canada and Europe as in the US. Even in European media it is incredibly common to see Apple devices (phones and PCs) as the placeholders.
Apple even has 20% of the mobile marketshare in China. That's incredible. It has 60%+ in Japan.
The corporation buying a fleet of low-end PCs for seat warmers aren't going to buy Apple (nor are they buying Surface Pros. They're buying the junkiest HP or Dell they can get). But there are as many "artsy" people, or people who care about the experience, in other rich countries as well.
Go to a mall in Bangkok and you will lose count of the number of people walking around with iphones. I don't know what you consider a "tier 2 and tier 3" country, since I think you just made that up, but maybe you should travel more.
In Italy iPhones used to be 50% of the market, because they were a status symbol and the iPhone was literally the only smartphone available. Ten years later they have dropped below 24% globally, and new sales are much lower, because:
> The number of people buying their first iPhone is declining. Apple's annual Worldwide Developers Conference this week will showcase how the technology giant is diversifying so that it no longer has to rely so heavily on its signature product.
-- 20th June 2020 - Bangkok Post
Regarding new phones, Apple is now third in Thailand with 10% of the market share, behind Samsung (22.3%) and Huawei (17.6%).
Consider that 25% is a minority, 75% is 3 times larger.
Apple is not gaining market share anywhere, except in the US.
In China it dropped from 25% to 17%, and in Europe from 17% to 14%, over the last year.
The article was about ARM in computers; Apple computing devices are only 8% of the total in Thailand.
Those were not personal attacks, they were explaining that apple stuff is desired around the world and even in countries with lower incomes, people will save up to buy apple hardware, buy it used, etc.
If there were any such thing as "tier 2 and tier 3 countries" in any broad sense, I'd say that's quite a lazy and disingenuous transition from "only on HN" and only something the US cares about, to "outside of the vast bulk of the world's economy".
Further I don't think you understand what "significant" marketshare is. Even 10% marketshare is significant, where the actions of that market player is impactful. But Apple is pushing way above that almost everywhere.
This circular logic is more disingenuous noise (the userbase of HN is utterly irrelevant given that you're trying to prove your own absurd claim that only HN cares, when clearly the entire developed world cares). How are you not moderated down to transparent by now? [I will tell you why - because the title draws in a certain crowd that is comforted by your nonsense, however fictional it is]
Whatever you hope the position of Apple is doesn't correlate with reality.
Aside: Ferrari has between 0.01% - 0.1% marketshare in a given year in the richest countries, and that is your example why Apple -- at 20% or so in most of the world's economies -- is irrelevant. That is some perilously embarrassing nonsense. Hyundai, in contrast, has 7% marketshare in the US and is considered a pretty big player.
> Apple -- at 20% or so in most of the world's economies
Apple has 17% (and dropping) of the mobile market share in Europe and around 9% in computing.
There aren't many other richer economies in the World.
Ferrari is now worth more than General Motors and Ford, so market share and net worth are not comparable.
Their market share is small because they make a luxury, limited-edition product.
They limit their production capacity to around 10 thousand cars/year (all sold on pre-order); Apple is trying to sell as many devices as it can.
There is no Ferrari Store where you can buy a Ferrari and drive it home half an hour later.
It's not a small difference.
Last but not least, there is a lot more competition between car manufacturers, even in the most expensive segments; Apple devices are only made by Apple.
Hyundai, for example, had a net income of over 3 trillion won in 2019; Apple's net income is measured in tens of billions of dollars.
M1 can accelerate the shift of home users to Macs.
Microsoft devices are already out of fashion compared to Apple's. The Windows UX is also subpar compared to Apple's. Now the hardware is also going to fall completely behind.
If all the young people get Macs as their home computers, will the next generations use Windows at the workplace?
I'm talking about the 10-20 year landscape for Windows and Office.
I find the macOS UX to be inferior compared to Windows. This is mostly a matter of habit, and what people are used to and how they expect things to behave.
All the young people I know (OK about a dozen nieces, nephews and neighbor kids) got PC laptops. Maybe I live in a bubble where kids playing video games is more common, but it seems like getting $300-500 laptops that can play thousands of PC games (and be used for homework) is still a pretty popular option. (Some of the older youth are getting $600-1000 PC gaming laptops instead.)
> Now the hardware is also gonna completely fall behind.
This may happen. For efficiency/battery life, it already has. But a handful of synthetic benchmarks where the M1 outperforms high-end AMD desktop chips in single core seems to have overshadowed the cases where those same desktop chips (and even some comparable 15W mobile APUs) outperform the M1 in other tasks.
I'd actually see a desktop Linux or a desktop Android as more realistic, unless Apple stops being so intentionally hostile to their users and stops treating them with such coddling infantilism. This cult game they're playing is eventually going to crash, and they'll have to start designing software that isn't just theatrics and showmanship.
I haven't noticed much acknowledgement of sole traders, as they're called in the UK. Very often when I see someone who's working for themselves they have a Macbook Air.
While they're more expensive to buy new than many Windows laptops, the general sentiment from those I've spoken to seems to be:
* It's worth the extra money for more reliable operation and fewer worries about malware.
* Buying used is much less of a lottery. They tend not to have hidden problems.
* Selling used means you actually get some return to put towards your next device.
I see people using Office and Chrome or Safari - and not much else.
This same narrative has been touted for decades now. I have heard it for 20+ years, yet Windows usage, in any quantifiable sense, has varied only slightly.
> M1 can accelerate the shift of home users to Macs.
I can't see the M1 making a lick of difference here. Home users aren't going to double their budgets to afford an Apple machine, especially one which can't even play most games.
> Home users aren't going to double their budgets to afford an Apple machine...
If I were to switch to a Mac mini I would pay about $350 less than it cost me to build my main workstation/gaming machine. According to the benchmarks and real-world tests, I'd gain quite a bit in performance over my Ryzen build, with a fraction of the power consumption which is currently 120W idle, 310W maxed out as measured by my UPS. Per real world tests, the M1 mini never goes above ~35W and stays cool and quiet even when maxed out on stress tests. Given the fact I no longer play games much at all beyond WoW and a few indie titles, and my non-gaming workflow is 100% doable on macOS, I'm sorely tempted to switch even with my reservations about Apple as a company.
> ...especially one which can't even play most games.
No one buys a Mac for AAA games, that's a fact and probably won't change immediately. With that said, Blizzard has day one native Apple Silicon support for WoW, and the benchmarks indicate the performance (depending on the game engine) sits somewhere between the GeForce 1650 and 1660 for most of the games tested so far. That covers pretty much all non-AAA titles on the performance side, leaving portability as the only real issue. I've seen tests done with Steam games and most macOS native x86_64 games make the transition via Rosetta2 quite nicely.
Not to mention, M1 Macs can now run most iOS apps natively which opens up a horde of games to the platform immediately, including popular titles like Minecraft, Fortnite, and CoD.
I do think that's the disconnect. Apple has eschewed gaming whereas Microsoft has embraced it. It turns out that gaming is a huge moat, and until Apple decides to invade they will be relegated to second place in the PC market.
On the flip side, the PC market is less and less relevant as phones have become the primary computing platform in the last 10 years.
And macOS Catalina deprecated 32-bit apps, which set the stage for a pure 64-bit Rosetta 2 emulator for the M1. The 32-bit deprecation makes more sense now.
That is true, although it's worth saying that (if I'm not mistaken) outside of patents we don't really know what the micro-ops do, i.e. RISC possibly isn't the best descriptor.
The microarchitectural classification of a modern CPU is slightly murky: a modern x86 has a pipeline much like the first pipelined x86 did, but the way the CPU actually uses it is completely different (OoO, speculative, etc.).
This makes Windows on Apple M1 devices more attractive. I wonder when Microsoft will take up Apple on their offer to create a bootable OS for their hardware.
If I was Microsoft, I would be very reluctant to support Windows on M1 Macs.
The obvious disadvantage is the consumer will get Mac OSX for free with the hardware, and won't be too keen to pay much more to Microsoft. Apple will end up earning most of the revenue.
The strategic disadvantage is that as Windows users move to M1-based Macs, sales of PC hardware will drop, since the PC ecosystem is hobbled by a lack of deep integration, lower-performance CPUs, lower profit margins, less R&D, etc. If sales volumes drop too much, it could trigger a cascade effect (fewer sales cause less R&D, causing less desirable products, causing fewer sales). That could kill consumer PCs entirely, which would be very bad for the future of Windows.
> The strategic disadvantage is that as Windows users move to M1-based Macs, sales of PC hardware will drop, since the PC ecosystem is hobbled by a lack of deep integration, lower-performance CPUs, lower profit margins, less R&D, etc. If sales volumes drop too much, it could trigger a cascade effect (fewer sales cause less R&D, causing less desirable products, causing fewer sales). That could kill consumer PCs entirely, which would be very bad for the future of Windows.
For this to happen Apple would have to fill in MANY, MANY niches, especially cheaper ones. Apple does not and will not fill those. Well, "never say never", but I don't expect them to sell a $400 laptop, all accessories included (so no sneaky $400 laptop with semi-mandatory accessories which bring the price to $700).
With M1 they have a single platform though. Phone, tablet, laptop, desktop. All on Apple silicon. The iPad already supports BT keyboards. M1 already supports iPad apps. I imagine an iPad is already powerful enough to run MacOS.
I imagine the next step is to unify the OS.
If that was even possible, you would have to wait at least 10 years.
At that point who knows if tablets as a form factor survived.
To be clear, I think that a tablet (not specifically iPads which are expensive in their segment as any other Apple device, there is no cheap option) could replace most PC nowadays, but to reiterate what's already been said: Apple is not present in too many places, for a reason.
To give an example: I work for a company in Italy with 14 thousand employees.
The laptops they give us software developers today are going to be the accountants' PCs of the future, unless they break before then. In that case they are replaced by the supplier with something with similar specs but new, probably a different brand, depending on what's available at the moment.
It's not imaginable that a company like this, which is relatively small if we look at the real giants of the world (including many in Italy as well), will replace every PC with tablets, not only for monetary reasons, but because they would have to retrain thousands of employees. Being in Italy, I'm sure that even talking about it would end in a strike. No kidding.
But even assuming that it would happen, they would buy cheap Asus tablets.
And even assuming it would be iPads: does the software the company has been using for 15 years run on them?
The answer is most certainly "no, it doesn't".
Battery life and power efficiency are not factors in such environments; they would rather plant trees, be part of a renewable energy consortium, or power the offices with solar panels (which they can also spin into PR) than buy Apple devices because they have an incredibly power-efficient CPU.
It doesn't really matter to them.
And I tell you this knowing that I work there because, ethically speaking, they are vastly better than average. They really do care about many small things that many others don't, but Apple ARM CPUs aren't one of them.
iPads are very limited, functionality wise. You barely have multiple windows. You can't really plug peripherals in. RAM is really limited. Etc.
_______________________________________
Also, HA HA. I've checked and I was right. As far as I can see the cheapest one is $329, BUT with 32GB of storage, which is absolutely ridiculous in 2020. 128GB costs $449, and even that's barely enough storage. The keyboard is $159 (!). So the minimum usable laptop replacement from them costs $610, and I'm probably missing some accessories you need, which would increase the price even further.
Let alone the fact that for the rest of the world Apple products are 10-20-30% more expensive. So this combo probably costs €700 ($800+) in Europe.
I haven't seen a $400 laptop whose memory you can upgrade. Perhaps they exist, but I've never seen one. The $250-300 Chromebooks are much weaker than an iPad.
I don't know if you've noticed, but Apple did release the iPhone SE 2, which is an almost-flagship at a medium price. An iPad with a slightly different iOS and a keyboard cheaper than $129 (there are $50 Bluetooth keyboards that work with the iPad, not made by Apple, that are comparable to the keyboards that come with $300-$400 laptops) - perhaps they'll call it the "iMac SE" or "iPad SE".
The processor inside last year's iPad is a slower, more limited version of the M1. Apple could, and I believe would, release a cheaper Mac-compatible machine, just like they did with the iPhone SE.
It made no sense to do so before the real Mac switched to Apple silicon (Intel's offerings are both expensive and inefficient for this use). Now it makes perfect sense. It may take a year or two, but it's the logical progression.
Apple are playing the very long game. Saying "you barely have multiple windows" is an indication you are not looking farther than the end of 2021.
Let's see. I know they are playing the long game, but you're grossly underestimating how big the market for cheap stuff is. Even if it's subsidized by spyware.
I doubt Apple wants to be in that market, it's a gross market (low margins, tons of support costs, etc.).
Just to give you an example, on a French PC site, the laptop section for €400 to €600 (they also have 10 or so under €400):
So about 230 (!) products in total, for a single site. In poorer countries, you can find even cheaper laptops.
Similar thing for phones, a $400 phone is not cheap. A $100-150 phone is cheap, like the Huaweis, Honors, Xiaomis, Oppos, Vivos, etc of the world (you don't even get those in the States).
Every time I see the selection of cheap laptops/tablets/phones I wonder "why even try to combat climate change?".
You've got these bottom of the barrel machines that fail very soon, and barely have enough performance as it is. It makes more sense to save up, buy something twice the price and use it for much longer.
It's like the boots theory of wealth: you buy cheap shit over and over again instead of buying something good that will last many times longer. The sad thing is that it doesn't just affect you; it affects everything and everyone.
On the other hand, people using cheap devices usually don't rely on them so much, so the higher consumption is compensated by the lower overall usage - or they have made the truly environmentally savvy choice.
I've never encountered an iPhone owner that didn't bring a charger in their backpacks.
Do you see old people like my father charging their sub-€100 smartphones at Starbucks, or young people with shiny high-end smartphones?
My father uses his smartphone so rarely that the battery lasts exactly as advertised: 2+ days on standby (I believe it's ~60 hours).
He uses no apps, except WhatsApp twice a year and Maps, has no background services running, takes a few pictures of his niece, and that's it. How much power can he consume? How much is he contributing to climate change compared to me, his son, who starts the day at 9 am and is at 20% battery by 18?
And I don't even use the smartphone that much, but hey, WFH: Slack and Teams are constantly checking for new messages, that friend sent you a message on IG, of course I'm gonna check my timeline. Look, a new notification, Amazon is delivering the package, let me check the news to see if the streets are still closed today because they are shooting that Tom Cruise movie (it really happened a few days ago!)
Etc. etc. How much of this could be saved?
On the opposite side, the construction workers who renovated my house all used old phones like the Nokia 3330, because they need something tough and durable that won't die on them after a few hours, in environments where electrical power is not guaranteed. So they go for the thing with the longest battery life on the market, which is also the cheapest option and, in the end, the most power efficient.
I checked, they have 8 under €400, all of which are Celerons or Chromebooks -- generally much more limited than a modern iPad.
€400 to €600 is actually $480-$720 -- which already includes your $600 "iPad + keyboard + more" right in its middle. A whole M1 Mac is $1000, which is about €825, not such a huge step from €600, and it delivers a ton more.
The iPad hardware is perfectly capable of running full macOS, and when Apple figures out the market segmentation that extracts the maximum revenue, they likely WILL introduce a cheaper Mac to take any money they are currently leaving on the table.
It makes sense to me that it will actually be marketed as an "iPad with keyboard that can run Mac apps", rather than a Mac, for the purposes of market segmentation and support (people expect more support from Apple for Macs than they do for iPads, and that factors into the price).
Prediction is hard, especially about the future. But Apple has proved (with iPhone SE, the new M1 Macs, and in other cases) that they are willing to attack any market segment currently held by others.
When they do it, if they do it - they'll do it by cannibalizing sales from others, not from themselves.
I see it completely differently: if Microsoft wants Windows on ARM to have a big future, they should push for Windows on the M1 Macs. This would be a showcase of how well Windows can run on ARM, and as Apple machines gain market share, it would push Windows developers to provide ARM-native versions of their software.
Windows on ARM has so far faced two obstacles: the lack of native software and the lack of great machines.
Having Windows running on ARM Macs could deliver a helpful push here and also keep Windows relevant for Mac users. On the other hand, there is no great risk, as Apple isn't taking over the PC market any time soon. Apple will likely increase their market share, but even then it will be just a fraction of the total market.
>"since the PC ecosystem is hobbled by a lack of deep integration, lower performance CPU's, lower profit margins, less R&D"
1) Deep integration with what? I do not feel that I am missing anything. So far I can do anything I want.
2) What low performance? Depending on the money spent, desktops and laptops perform just fine. The M1 is a good chip, but the new AMD chips are faster on single core, and on multiple cores they will wipe the floor with the M1. So far the M1 is more power efficient. I am pretty sure that, given time, AMD/Intel will catch up if there is good ROI on that particular quality.
I don't think that Microsoft cares that much about the single-digit percentage of their user base who will use an M1 Mac. MS's strength is in the office (as in the place and as in the software package), and a) it's hard to fathom the majority of companies all around the world ditching their PCs and buying Mac Minis, and b) there is already an Office package on the Mac, and Mac users are already paying for it.
They don't need to "support" Windows on ARM Macs. They just need to open up the licensing model so that users can buy Windows for ARM from Microsoft and install it via Bootcamp. Apple has already stated that they "have the core technologies for them to do that, to run their ARM version of Windows, which in turn of course supports x86 user mode applications".
While I can't say it would outweigh the disadvantages, the plus side would be more incentive for developers to create native ARM builds of their applications for Windows, making the user experience better for everyone. Having emulation is nice, but having native applications is even better.
With Intel-based CPUs, Windows support was an important feature and selling point, so Apple created Bootcamp. Windows on ARM is far less appealing than its x86 counterpart was 15 years ago, so Apple didn't add Bootcamp. With their hypervisor it seems like Windows will run just fine in a VM instead.
I'm not sure Microsoft is interested in selling a version of Windows that runs atop MacOS as a child OS. As things currently stand, Windows running on the Mac embarrasses the heck out of the Surface Pro X. That's without decent driver support. How would Microsoft look if the fastest way to run Windows on ARM was as a child OS on a Mac?
I think it's more likely that Microsoft will offer a cloud version of Windows for Mac users. A nicely packaged app that connects to an instance on Azure.
What will happen is Microsoft will introduce a shiny new Surface in a barrage of marketing which rips off most characteristics of the M1 MacBook. It will contain an SoC which is a vaguely rebranded off-the-shelf one. It'll be entirely held together with glue and tape and receive a 0 rating on iFixit. There will be no support network in most countries other than North America, where the support will just be terrible anyway. The software situation will be a shit show universally, with bugs, slowdowns and problems galore. All the app vendors will refuse to poke it with a 6-foot-long shitty stick. Eventually CTOs will buy a few; they will die within 6 months and be left in a corner of the office until their spicy pillows inflate enough to cause alarm and go in the WEEE bin for recycling. Microsoft will puff up its chest like a pigeon and strut around declaring victory. The outcome of this will be all OEM vendors finding an even cheaper way to shift no-brand OEM ARM laptops to the lowest bidders via the retail channel, now that they don't have to pay Intel any money. They will be stuffed to the brim with crappy user experiences and barely working software while Microsoft takes their cut from the Windows license as it always has done.
Microsoft are between a rock and a hard place. They've done a very good job maintaining compatibility with Windows/DOS x86 programs as the hardware and the OS grow, but eventually, for Windows to survive and be competitive, they need to ditch it and do a successful port to the next generation (presumably ARM).
This will be challenging and painful for many of their existing users.
I think that's one of the reasons why they created the .NET Framework: to be able to create a new "virtual" platform that can be ported to other environments. There is a lot of C# software out there that should run on ARM without any issues, given a .NET runtime.
That was also the raison d'être of Universal Windows Platform apps: to stop depending on Win32 APIs, which are tied to the system architecture. Not a bad goal IMO.
I wish we'd see more architecture-neutral binaries, based on LLVM or WebAssembly.
I've worked on zSeries machines running S/360 binaries from the early 70s. It is one of the primary reasons banking and related sectors that are heavily regulated stick with such platforms for core accounting.
The single whole-bank migration I was involved with was zSeries to zSeries.
This is where the challenger banks really have an advantage. They're not saddled with the baggage of an overnight batch run of JCL decks and Data Sets to arrive at what their regulators consider the reconciled position of the bank.
Mobile game development on Android is heavily built around Mono. It is perhaps the biggest existing proving ground for the architecture independence of C#.
The ARM platform is currently massively held back IMO by its lack of standardization. I think there is tremendous value hidden behind a company that achieves a push for consistency. Microsoft has invested tens if not hundreds of thousands of man-years in this, so who would be better qualified for it than them?
RISC-V is kinda in a similar situation as ARM. If I were Microsoft, I'd be investing hard into the RISC-V platform as it still has tremendous growth left.
Do you mean device probing? There's a standard API implemented by Linux for the firmware to communicate this to the kernel, and PCI works on ARM just like on x86. No one bothers with this, though, because all of the popular ARM SoCs tend to come from organizations that are extremely hostile to their users.
Do you mean the userspace ABI? glibc on armv7 and aarch64 has had a pretty stable ABI for a while now; I can personally share binaries between all my ARM machines that share an architecture (not that flinging binaries around is a good thing).
> There's a standard API implemented by Linux for the firmware to communicate this to the kernel, and PCI works on ARM just like on x86. No one bothers with this, though, because all of the popular ARM SoCs tend to come from organizations that are extremely hostile to their users.
That's the problem. Most SoC vendors just don't care: they implement drivers in or around forks of the kernel, and specify the hardware via device trees baked into those forks. IIRC, having the firmware hand the device tree to the kernel, instead of it being built into the kernel image, is a relatively new development.
If MS came and said "we want stable interfaces", the vendors would buckle, and MS would be able to ship one OS image to all devices instead of having hardware-specific custom images.
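To make the "one OS image for all devices" idea concrete, here's a rough sketch of how a mainline Linux driver binds to whatever hardware the firmware describes in the device tree, instead of having the board layout hardcoded in a per-device kernel fork. This is only an illustration: the "acme,demo-uart" compatible string and all the names are invented, not from any real vendor.

    /* Sketch only: a trivial platform driver that binds to a device tree
     * node via its "compatible" string. "acme,demo-uart" and every other
     * name here are made up for illustration. */
    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static int demo_probe(struct platform_device *pdev)
    {
            /* Called when the firmware-provided device tree contains a
             * node whose compatible string matches the table below. */
            dev_info(&pdev->dev, "bound to device tree node %pOF\n",
                     pdev->dev.of_node);
            return 0;
    }

    static const struct of_device_id demo_of_match[] = {
            { .compatible = "acme,demo-uart" }, /* hypothetical vendor,device */
            { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, demo_of_match);

    static struct platform_driver demo_driver = {
            .probe  = demo_probe,
            .driver = {
                    .name           = "demo-uart",
                    .of_match_table = demo_of_match,
            },
    };
    module_platform_driver(demo_driver);

    MODULE_LICENSE("GPL");

The same generic kernel image then works on any board whose firmware hands over a device tree with a matching node, which is exactly the kind of stable interface the comment above is asking vendors to commit to.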
> The ARM platform is currently massively held back IMO by its lack for standardization.
Note there is progress there with things like ARM's ServerReady which standardises a lot. The Windows-on-ARM requirements also push a lot to be far more standardised (it effectively requires a subset of the ServerReady requirements AIUI).
It would be very nice if an open CPU ISA that anyone can implement, like RISC-V, were more widespread.
Sadly, RISC-V is not a good candidate for that. It is an excellent ISA for student implementation projects, but not for real work.
RISC-V is praised only by people who have very little or no experience with different CPU ISAs and with the history of their evolution.
RISC-V sucks more than even the ugliest ISA, x86. At least from the huge x86 ISA it is possible to extract a subset composed mostly of more recently introduced instructions, which can be considered as a decent ISA, except for the weird encoding, with many prefix bytes, which is needed for backward compatibility.
The 64-bit ARMv8 ISA is an example of a good ISA, but its future is clouded by the NVIDIA acquisition.
> RISC-V is praised only by people who have very little or no experience with different CPU ISAs and with the history of their evolution.
I'm having trouble actually digging up the reference right now, but I've been told that it was specifically built out of decades of industry experience in what works in building processors/ISAs.
> RISC-V sucks more than even the ugliest ISA, x86.
How so?
> At least from the huge x86 ISA it is possible to extract a subset composed mostly of more recently introduced instructions, which can be considered as a decent ISA, except for the weird encoding, with many prefix bytes, which is needed for backward compatibility.
My big question about this is what performance is going to be like. On the Mac, Rosetta 2 apps seem to take about a 10% performance hit, which is impressive. From what I can tell, Windows x86-32 emulation is not nearly as performant.
The big performance gap between Mac and Windows Intel support seems to boil down to translation versus emulation. Since Windows' 64-bit support still uses emulation, I expect the performance of Intel 64-bit apps is still going to be pretty slow.
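For intuition, here's a toy sketch of the distinction (invented guest opcodes; this is nothing like the real Rosetta 2 or Windows translators): a pure emulator pays decode-and-dispatch overhead on every executed guest instruction, while a translating approach compiles a block of guest code once, caches the result, and runs native code from then on.

    /* Toy illustration of emulation vs. cached translation. The guest
     * opcodes, the one-entry "translation cache", and all names are
     * invented for this example. */
    #include <stdint.h>
    #include <stdio.h>

    enum guest_op { OP_ADD, OP_SUB, OP_HALT };

    /* Pure emulation: decode and dispatch every guest instruction,
     * every time it runs; the overhead is paid on each execution. */
    static int64_t interpret(const uint8_t *code, int64_t acc)
    {
        for (size_t pc = 0; ; pc++) {
            switch (code[pc]) {
            case OP_ADD:  acc += 1; break;
            case OP_SUB:  acc -= 1; break;
            case OP_HALT: return acc;
            }
        }
    }

    /* Cached translation: the first execution "compiles" the guest block
     * to a native routine and remembers it; later runs jump straight to
     * native code, so translation is a one-time, first-run cost. */
    typedef int64_t (*native_block)(int64_t acc);

    static int64_t translated_add_sub(int64_t acc) { return acc + 1 - 1; }

    static native_block cache_entry; /* stands in for a real code cache */

    static int64_t run_translated(int64_t acc)
    {
        if (cache_entry == NULL)
            cache_entry = translated_add_sub; /* "translate" once */
        return cache_entry(acc);
    }

    int main(void)
    {
        const uint8_t prog[] = { OP_ADD, OP_SUB, OP_HALT };
        printf("emulated:   %lld\n", (long long)interpret(prog, 5));
        printf("translated: %lld\n", (long long)run_translated(5));
        return 0;
    }

Roughly speaking, that's why a translator's cost tends to show up as a one-time, first-run hit, while per-instruction emulation keeps paying the overhead for as long as the program runs.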
One really good side effect of this move is going to be that you can run Android apps with ease on Windows. I wonder if this will make people move towards Android apps instead of Electron apps. Since Android apps are made for low-power devices, they would run with minimal resource consumption on laptops. They are pretty snappy too.
How would an x64 emulator help with running Android applications on a Windows computer?
Are you maybe mixing this up with Microsoft's recent Android on Windows efforts? These would require an ARM emulator for x86, not the other way around.
GP is implying that PCs will move to ARM and run legacy Windows software on the x64 emulator. Android apps compiled for ARM will therefore run natively.
or we could just, you know, write better software. I'm not going to go on a whole rant about the link between software quality and hardware speed, but I will say this:
If we can write an app that does the job on a low-power device, we can write the same app for a high-power device and not make any compromises. I wonder why we don't do that. As in, why don't we develop software that runs as fast as it can? Some companies do, and it's really nice to see; game developers in particular are notoriously good at this. Yet, if I want a cross-platform chat service, I usually have to put up with code running in a JS interpreter, through a JIT compiler, in a sandbox in a sandbox in a browser, burning CPU cycles so hard my laptop thermally throttles.
Microsoft could manufacture their own hardware, providing free Windows updates while charging for services such as Office, Drive, Store, Music, Photos - and their biggest advantage over the Mac - games.
How does this compare to Rosetta 2? What's the difference between emulation and translation? Presumably, translation would be much faster? In that case, why did MSFT decide to go with emulation? My understanding is that this will be slower, but more compatible.
If the Raspberry Pi is running Windows and the games you want to play were compiled for x86_64 or ARM, yes.
The emulation will allow software that was only compiled for x86_64 to run on ARM.
Are they doing any kind of translation, too, or just pure emulation? Because if they aren't, then Windows will continue to run significantly slower on the already significantly lower-performance third-party Arm chips compared to how macOS runs on the M1.
My prediction is 50%-60% of the performance of M1 on the latest/highest-end Qualcomm laptop chip.
Some of the perf gains on the M1 are due to hardware support for TSO (total store ordering), which doesn't exist on the latest/highest-end Qualcomm chip, hence the Windows support needs to insert many more barriers to emulate TSO. This likely has a significant impact on emulated performance.
At least the x86 (32-bit) emulation relied entirely on JIT compiling its emulated code, instead of doing anything AOT. _However_, it cached the JIT'd code so in practice it was a first-run cost, similar to the situation on macOS.
I believe Windows elides TSO guarantees when it thinks they aren't necessary. This should usually work out all right; only a small amount of code cares very much about memory ordering.
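To illustrate why emulating TSO on weakly ordered hardware costs barriers, here's a small C11 sketch (made-up variable names; this is not how Microsoft's emulator is actually implemented). x86 keeps these two plain stores visible in program order for free; a translator that can't rely on hardware TSO has to emit them as release stores (or follow them with fences), and conservatively do the same for loads, to preserve the guest's assumptions.

    /* Sketch: preserving x86-style store/load ordering on a weakly
     * ordered CPU by using release stores and acquire loads. All names
     * are invented; compile with e.g. gcc -std=c11 -pthread. */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    static atomic_int data;
    static atomic_int ready;

    static int producer(void *arg)
    {
        (void)arg;
        /* Guest x86 code: mov [data],42 ; mov [ready],1
         * x86 hardware already keeps these two stores ordered. On a
         * weakly ordered core, the translator must emit release stores
         * (or barriers) to get the same visibility order. */
        atomic_store_explicit(&data, 42, memory_order_release);
        atomic_store_explicit(&ready, 1, memory_order_release);
        return 0;
    }

    static int consumer(void *arg)
    {
        (void)arg;
        /* Spin until 'ready' is set, then read 'data'; acquire loads
         * keep the guest-visible ordering intact on weak hardware. */
        while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
            ;
        printf("data = %d\n",
               atomic_load_explicit(&data, memory_order_acquire));
        return 0;
    }

    int main(void)
    {
        thrd_t p, c;
        thrd_create(&p, producer, NULL);
        thrd_create(&c, consumer, NULL);
        thrd_join(p, NULL);
        thrd_join(c, NULL);
        return 0;
    }

Every one of those release/acquire operations turns into an extra-ordered instruction or barrier on a weakly ordered core, which is why hardware TSO support (present on the M1, apparently absent on the latest Qualcomm chips) matters so much for emulation speed.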
I think the parent comment was worded poorly but this is how I'd look at it.
MacOS with M1 runs software compiled for ARM at 100%.
MacOS with M1 runs translated software via Rosetta 2 at like 95%.
Windows on ARM runs software compiled for ARM at 100%.
Windows on ARM runs x86/x64 emulation at 60%.
If Windows for ARM would replace x86/x64 emulation with binary translation, they might remove a significant barrier to ARM adoption (on Windows) by removing the harsh penalty of having to emulate legacy software until enough native/compiled for ARM software exists.
That doesn't mean Microsoft shouldn't be proactive.
That reminds me of an interview with Steve Jobs: the masses don't necessarily know any better; the role of Microsoft, Apple and other big companies is to show people the impossible is possible.
Speed, efficiency, and a fanless design without a perf penalty are possible; the M1 is showing that to people.
So it was Apple's moves into ARM that finally prompted Microsoft to improve Windows' support for ARM, whereas Microsoft's own forays into ARM were largely ignored and forgotten, even by themselves.
lol no. Windows on ARM has existed for many years now. And running x86-32 apps on ARM has also been around for a while. This is just the public announcement of also supporting x86-64 apps on ARM (AArch64 specifically). This was in the works before Apple announced its move to ARM.