2020 Mac Mini – Putting Apple Silicon M1 To The Test (anandtech.com)
724 points by kissiel on Nov 17, 2020 | 748 comments



From the article's conclusion... "The M1 undisputedly outperforms the core performance of everything Intel has to offer, and battles it with AMD’s new Zen3, winning some, losing some. And in the mobile space in particular, there doesn’t seem to be an equivalent in either ST or MT performance – at least within the same power budgets."

This is the first in-depth review validating all the hype. Assuming the user experience, Rosetta 2 translation, first-generation pains, and kernel panics are all in check, it's amazing. At this point I'm mostly interested in the Air's performance with its missing fan.


Unsung hero here is TSMC and their industry-leading 5nm fabrication node. Apple deserves praise for the M1's SoC architecture and for putting it all together, but the manufacturing advantage is worth noting.

Apple is essentially combining the tick (die/node shrink) and tock (microarchitecture) cadences each year, at least for the past 2-3 years. The question, perhaps a moot one, is how much of the performance gain can be attributed to each. The implication is that the % improvement due to the tick is available to other TSMC customers, such as AMD, Qualcomm, Nvidia, and maybe even Intel.

We'd have to wait until next year (or 2022), once AMD puts Zen4 on 5nm, to see an apples-to-apples comparison of per-thread performance. But of course by then Apple will be on TSMC 3nm or beyond...

EDIT: confused myself with tick and tock


Worth mentioning is the insane and disruptive technology making TSMC's 5nm possible. ASML and its suppliers have built a machine[1] that has a sci-fi feel to it. It took decades to get it all right, from the continuous laser source to the projection optics for the extreme ultraviolet light[2]. This allowed photolithography to jump from 193nm light to 13.5nm, very close to X-rays. The CO2 laser powering the EUV light source is made by Trumpf[3].

Edit: More hands-on video from Engadget about EUV at Intel's Oregon facility[4]

[1]https://www.asml.com/en/products/euv-lithography-systems/twi... [2]https://youtu.be/f0gMdGrVteI [3]https://www.trumpf.com/en_US/solutions/applications/euv-lith... [4]https://youtu.be/oIiqVrKDtLc


Thanks for the great links / resources. Those machines look insanely complicated. I can just imagine how they get shipped to Taiwan and elsewhere (they apparently cost $120M each in 2010 [1]).

A bit off-topic, but I've always found it amusing that a form of lithography, of all things, has fundamentally powered our tech revolution for decades. Especially after a girl I knew learned lithography in an art class; watching her do it in a primitive form inspired me to read about its history in art and professional uses (signage, etc).

That and vacuum tubes (which also rank high up there in the revolution thing) are the two things I one day wish to learn how they really work. Not just surface-level nodding along.

[1] https://www.eetimes.com/euv-tool-costs-hit-120-million/


That’s amazing - now I have questions.

They say “decades in the making” - when did it first become viable, and then how long did it take to master the process and become confident enough to mass-produce consumer goods with it? I’d love to see a timeline with milestones.


Apparently the first EUV prototypes were shipped to TSMC for R&D in 2010(!).

ASML has a timeline of the company and technology development[1].

[1]https://www.asml.com/en/company/about-asml/history


Apple has built up second-source partners for cost reduction for a long time. But most of their CPUs are made by TSMC right now.

I'm wondering whether Apple will find another semiconductor fab partner (they tried building the A9 at both Samsung and TSMC, but the Samsung version seemed to have heat issues) or stick with TSMC.


Are there any fabs that can compete with TSMC? Or EUV equipment manufacturers that can compete with ASML?


Going by density (MT/mm²):

TSMC 5nm: 173.1

Samsung 5nm: 126.5

Intel 10nm: 100.8

Intel 7nm: 202 (estimated [1])

Therefore TSMC is still the best in density right now.

[1] https://en.wikichip.org/wiki/7_nm_lithography_process#Intel


Wonder if Apple would buy TSMC


Taiwan would never let go of their golden goose.


That conclusion is quite misleading, in my opinion.

They write "outperforms the core performance" and the keyword here is "core". What they mean is that if one had a single-core Zen3 and a single-core M1, then the M1 would win some and lose some.

But in the real world, most Zen3 CPUs will have 2x or more cores, thus they'll be 2x to 4x faster.

So what they mean to say is that they praise Apple for having amazing per-core performance. But it kind of sounds as if the M1 had competitive performance overall, which is incorrect.


The Zen3 processor that they are comparing it to is the 5950x - the fastest desktop processor with a TDP of 105W. The entire system power of the M1 mini under load was 28W.

What the article is pointing out is that the mobile, low-power version of the M1 (the mini is really just a laptop in an SFF box) is competitive with the top-end Zen3 chip; the benchmark gap is smaller than 2x.

We don't know yet how far the M1 scales up; e.g. a performance desktop will presumably have a higher TDP and probably trade the integrated GPU space for more CPU cores. But we don't know if/how this will translate into performance gains. Previous incarnations of the Mac Pro have also used multiple CPUs, so it is not yet clear that "in the real world, most Zen3 CPUs will have 2x or more cores".


> The Zen3 processor that they are comparing it to is the 5950x - the fastest desktop processor with a TDP of 105W. The entire system power of the M1 mini under load was 28W.

This is a very misleading statement. They primarily used the 5950X only in single-core tests, and in those tests it doesn't come remotely close to 105W. In fact, per Anandtech's own results[1], the 5950X CPU core under a single-core load draws around 20W.

Take the M1's 28W under a multi-threaded load: that's going to be somewhere in the neighborhood of 4-5W/core for the big cores (single-core was ~10W total, ~6W "active" - figure clocks drop a bit under multi-threaded loads, and the little cores almost certainly draw much less power, particularly since they are also much, much slower). Under multi-threaded loads the per-core power draw on a 5950X is around 6W. That's a _much_ closer delta than "105W TDP vs. ~28W!" would suggest.

M1's definitely got the efficiency lead, but it's also a bit slower, and power scales non-linearly. It's an interesting head-to-head, but that 105W TDP number on the 5950X is fairly irrelevant in these tests; it's not really playing a role. Just as it's about as irrelevant as you can get that the 5950X has 4x the big CPU cores, since it was again primarily used in the single-threaded comparisons. Slap 16 of those firestorm cores into a Mac Pro and bam, you're at 60W. Let it run at 3.2GHz all-core instead of the 3GHz it appears to run now, since you've got a big tower cooler, and that's 100W (6W/core @ 3.2GHz per the Anandtech estimates * 16). That'd be the actual multi-threaded comparison vs. the 5950X if you want to talk about 105W TDP numbers.
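Spelled out, the back-of-envelope math (per-core figures are the Anandtech-derived estimates above; the 16-core part is purely hypothetical):

    # rough scaling sketch using the per-core estimates quoted above (hypothetical part, not a real product)
    echo "16 cores * ~4 W/core @ ~3.0 GHz = $((16 * 4)) W"   # the "bam you're at 60W" ballpark
    echo "16 cores * ~6 W/core @ ~3.2 GHz = $((16 * 6)) W"   # ~96 W, i.e. 105 W TDP territory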

Critically though the M1 is definitely not a 10W chip as many people were claiming just a few days ago. You're definitely going to see differences between the Air & 13" MBP as a result.

1: https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...


> This is a very misleading statement. They primarily only used the 5950X in single-core tests, and in those tests it doesn't come remotely close to 105W. In fact per Anandtech's own results[1] the 5950X CPU core in a single-core load draws around 20w.

It would seem that the switching of AMD chips in the various graphs has caused some confusion. I was referring to the "Geekbench 5 Multi-Thread" graph on page 2. This shows a score of 15,726 for the 5950X vs 7,715 for the M1, which is about 2x. I do not see any note that the benchmark is using fewer cores than the chip has available.

I don't follow your argument for why it is misleading to characterize the 5950X as a 105W TDP part in this benchmark. Could you expand a little on why you believe this is misleading? The article that you linked to shows over 105W of power consumption from 4 cores up to 16.

Edit: I put in the wrong page number in the clarification :) Also, I see later in the linked article that the 15726 score is from 16C/32T.


If you're referring to the single time the 5950X's multi-threaded performance was compared then sure, the 105W TDP is fair. But you should also be calling that out, or you're being misleading, as the majority of the 5950X numbers in the article were single-threaded results, and it did not appear in most of the multi-threaded comparisons at all.

But in multi-threaded workloads it also absolutely obliterates the M1. Making that comparison fairly moot (hence why Anandtech didn't really do it). It's pretty expected that the higher-power part is faster, that's not particularly interesting.


It's really not clear what you are trying to argue here. The number of single-threaded benchmarks is irrelevant to this point. When the M1 was compared to the 5950X in a multithreaded comparison:

* The 5950X was 2x faster

* The 5950X was using 4x the power (28W system vs 105W+ for the processor)

* The M1 only has 4 performance cores; the 5950X has 16.

Even counting the high-efficiency cores as full cores, the comparison has the M1's 8 cores providing half the performance of the 5950X's 16 cores, i.e. it implies that even the lower-performance cores are providing as much as the 5950X's cores.

That is certainly not the 5950X obliterating the M1, as the article stated (and was the quote that started this thread) the M1 is giving the 5950X a good run for its money. If you think otherwise could you provide some kind of argument for why you think so?


The 2x number you're claiming was only for geekbench multithreaded, which was the only multithreaded comparison between those two in the Anandtech article. You're trying to make broad sweeping claims from that one data point. That doesn't work.

Take for example the CineBench R23 numbers. The M1 at 7.8k lost to the 15W 4800U in that test (talk about the dangers of a single datapoint!). The 5950X meanwhile puts up numbers in the 25-30k range. That's a hell of a lot more than 2x faster. Similarly in SPECint2017 the M1 @ 8 cores put up a 28.85, whereas the 5950X scores 82.98. Again, a lot more than 2X.

This is all ignoring that being 2x faster for 4x the power is actually a pretty good return anyway. Pay attention to the power curves on a modern CPU, or to what TSMC states about a node improvement: for 7nm to 5nm, for example, it was either 30% more efficient or 15% faster. Getting the M1 to be >2x faster is going to be a lot harder than cutting the 5950X's power consumption in half (a mild underclock will do that easily - which is how AMD crams 64 of these cores into 200W for the Epyc CPUs, after all). But nobody cares about a 65W highly multithreaded CPU either; that's not a market. Whatever Apple comes up with for the Mac Pro would be the relevant comparison for a 5950X.


You're being obtuse. The only test you're using is Geekbench, which just isn't useful for these kinds of comparisons.

In other multicore benchmarks, the M1 gets beaten by AMD parts with lower TDPs, and the 5950X has something like 3 to 4+ times the performance.


Calling me obtuse doesn't add anything of value to the discussion. I was pointing out the multithreaded benchmark in response to the claim that there were none. Read kllrnohj's response that is the sibling to your comment to see how to make a point effectively.


That's not the obtuse part. The obtuse part is ignoring all the other multicore tests of the same uArch and then saying that the 5950X compares unfavorably; ignoring the fact that single-core Geekbench perf on the 5950X doesn't scale like any of the other tests and is much lower relative to them; taking the one test where the 105W TDP is actually used as significant to all the other comparisons; and then saying that it's being compared to a chip with a 105W TDP, when in all multicore tests except two it gets compared to the 4800U (which beats it with half the power consumption).

Across the article, it's not getting compared to the 5950X in anything but single-core performance, with one exception. And the claim that it's being compared generally to a 105W TDP part is also false, because in the multicore comparisons, where the total TDP makes sense, it's getting compared to parts with half or 150% of its TDP and losing.

In reality, it's getting compared to a 6-7w core, and to 15-45w chips.


Yeah I think it is incredibly tiring how everyone said "it's both faster and more energy efficient" when the benchmarks have shown something far more obvious and boring. You can make ARM chips that are just as fast as x86 chips and they will end up consuming roughly the same amount of power during heavy calculations but much less in idle. The fact that ARM is king in idle power consumption isn't a surprise. It's ARM's bread and butter.

All the wishful thinking was wrong but that doesn't mean ARM is doing badly.


You would be better at conveying your point if you could manage it without insults.


I wasn't trying to insult you. I was just trying to say that that interpretation is so off that it seemed to me it came from a biased understanding, which I'm a bit tired of in these threads, where people are acting like it's the best thing since sliced bread when it's obviously just another competitive chip.

That being said, I probably should've phrased it differently. I wasn't aware that word had such a connotation in English; in my mother tongue it just means a narrow interpretation.


An author who deliberately switches which chip to test in different versions of the same test in order to paint the desired picture isn't much different than one who literally makes up the numbers. The whole article ought to be flagged and deleted.


The M1 has a lot of great things about it and I'm excited to see what it can bring. Intel needs to be humiliated by something great, to remind them that they have been crap for a long time.

But... other than ST performance, multi-core scaling isn't linear. At 16 cores, core-to-core communication takes a hit that isn't nearly as bad with 4 cores.


> This is a very misleading statement. They primarily only used the 5950X in single-core tests, and in those tests it doesn't come remotely close to 105W.

That’s true but keep in mind this is the power going into the AMD CPU only. The power number measured for the mini was the entire system power pulled from the wall, so that 28W included memory, conversion inefficiencies and everything. That’s crazy.


Actually a significant amount of power, maybe around 20 W, is consumed by the I/O die, which draws a lot because it is made on an older process.

In 2021, when laptop Zen 3 is introduced, it will have much better power efficiency, being made entirely on 7 nm.

Of course, it will still not match the power efficiency of M1, which is due both to its newer 5-nm process and to its lower clock frequency at the same performance.


> which consumes a lot because it is made in an old process.

And also because it's doing a lot. Infinity fabric for the chiplet design isn't cheap, for example. A single-die monolithic design avoids that (which is why that's what AMD did for the Zen2 mobile CPUs).


When we get to the detailed comparisons - it's almost impossible to compare without deconstructing the chips.

In the end it'll be a question of - can Apple scale it without incurring massive costs?


>the fastest desktop processor with a TDP of 105W

TDP is a useless marketing figure. Anand measures the AC power consumption of the Mini, which is a good measure, but that is not comparable against CPU TDP because TDP has a tenuous relation to actual power draw at best [0]. A better comparison would be ARM Mini vs Intel Mini AC power draw, and a similarly spec'd AMD system for good measure. Unfortunately, unless I missed something, the article only measured AC power draw from the ARM Mini.

The M1 is certainly more power efficient than Intel or AMD for the average user, but as far as performance per watt, we cannot make any judgements with the data we have.

[0] https://www.gamersnexus.net/guides/3525-amd-ryzen-tdp-explai...


Not to mention 5950X alone without cooling ($799) costs almost as much as an entry level MacBook Air.


The single core performance between the 5600x and the 5950x isn't significantly different. The charts have some interesting gaps...

Edit: Putting it head to head with the 5600x would make a lot of sense for price/core/desktop space comparison.


Yes. I'd like to see a decent-ish Ryzen APU such as the 3400G up against one of these as well.

I did notice that the Cinebench score for the M1 is only about 10% higher than my Ryzen laptop's (T495s), which is laughable as it's a 3500U and the whole thing cost me £470 new!


Not to mention 5950X alone without cooling ($799) costs almost as much as an entry level MacBook Air

The M1-based Mac mini starts at $699.


Yeah, forgot about that. Everything else being equal (ostensibly), the M1 Mac Mini is $200 cheaper than the crappy Intel i5 Mac Mini, more if you upgrade the Intel CPU.

As an owner of a decked out 2019 Mac Mini, in hindsight I made a shitty purchase decision.


As an owner of a decked out 2019 Mac Mini, in hindsight I made a shitty purchase decision.

Probably not. If you need a machine to get work done, as you probably did, it always makes sense to buy what's current.

It's different if you can afford to wait for a particular upgrade we know is coming.

I bought a 4K Retina iMac a little over a year ago because I needed one badly, and it's been great.


No matter what the purchase, I always force myself to stop comparing for a bit of time afterwards. By the time I pull the trigger, I have shopped and compared as best I can. Inevitably, as soon as I complete the sale, one of the places I was looking will have lowered the price or released the next gen.


I bought what I thought was a 2020 Mac Mini in April direct from Apple. The only significant difference on paper was that the base model came with 128GB for the 2018, 256GB for the 2020.

As it turns out, that's true: About This Mac says "Mac mini (2018)" even for the 2020.

I replaced the 8GB base RAM with 32GB of aftermarket and have been thrilled with it. But then I was coming from a 2018 MBP 4-Thunderbolt with only 8GB and the fan noise with it drove me nuts.

I got the i3 because I thought the CPU wasn't the weak point, the RAM was. And so far, for me, that's held up.


Why?

Today's purchase of Mac Mini will be a crappy decision in hindsight in about a year... and that is true every year.

It would have been a crappy decision - if you got a worse product at the time of purchase. So don't get FOMO.


I actually just bought an Intel Mac Mini to run macOS VMs using ESXi. I expect it will be quite a while before stable macOS VM support is available for Apple Silicon Macs.


>in hindsight I made a shitty purchase decision.

Yeah you did. Why would you buy something you don't need? It doesn't even matter if the Mac Mini with Apple Silicon existed or if from now on the only computer Apple sold is a Mac Mini.

Okay, let's be serious. You bought the x86 Mac Mini because you wanted an x86 Mac Mini, not because you wanted to make perfect purchasing decisions with infinite foresight. A lot of software is broken on the M1 Mac Mini, so you made the right decision at the time. It's entirely possible that you would have regretted buying the M1 Mac Mini.


I sold my 2018 Mac Mini with high specs 1 week before the keynote.

The guy must be feeling bad right now.


The Resulting Fallacy Is Ruining Your Decisions

http://nautil.us/issue/55/trust/the-resulting-fallacy-is-rui...

There’s this word that we use in poker: “resulting.” It’s a really important word. You can think about it as creating too tight a relationship between the quality of the outcome and the quality of the decision. You can’t use outcome quality as a perfect signal of decision quality, not with a small sample size anyway. I mean, certainly, if someone has gotten in 15 car accidents in the last year, I can certainly work backward from the outcome quality to their decision quality. But one accident doesn’t tell me much.


Well that CPU has 16 cores / 32 threads while the M1 has 4 high power cores and 4 low power ones.


I feel like most people here haven't seen the SPEC benchmarks AnandTech performed (and they're partly to blame for that; their UX is awful). But the M1 is toe-to-toe with desktop Ryzen: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

E: And multi-core SPEC: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste..., where they're on par with mobile Ryzen.


I looked up multi-core Cinebench R23 and the AMD 2990WX comes in at 33,213 vs. the 7,833 given for the M1 in the article.

Apple markets this as a "Pro" device for professional video editing. That's why I believe it is fair to take their word and compare it against my other options for a professional video editing rig. And in that comparison, which Apple has chosen itself, the M1 comes out woefully inadequate at a mere 24% of the performance.

Of course, for a notebook, the M1 is amazing. But I feel irked that Apple and Anandtech pretend that it's competitive with desktop workstations by having such a misleading conclusion about it being on par with Zen3 - which it clearly isn't.


> Apple markets this as a "Pro" device for professional video editing. That's why I believe it is fair to take their word and compare it against my other options for a professional video editing rig.

That's ridiculous. Threadripper has 8 to 16 times as many cores, runs on hundreds of watts of power and such a CPU alone costs the same as several Mac Minis. Them claiming you can use it for video editing doesn't mean you can expect that a 1.5 pound notebook will measure up to literally the biggest baddest computer you can buy.


He knows it’s ridiculous, but you’re going to see a large group of people who hate macs take this turn of fortune quite poorly. My hope is that it really puts pressure on intel to start firing on all cylinders but who knows? A MacBook Pro 16 with higher clocks and more gpu cores would be a really hard system to not buy.


I wonder what is going on at Intel. A resurgent AMD has more or less surpassed them in CPU offerings already, and now so has Apple. They have fallen so far. Can it just be institutional complacency? I don’t get it.


It is ridiculous. That being said, on a per-core basis at similar wattage, Zen 3 is equivalent to the M1 chip but has an order of magnitude more I/O.


> Apple markets this as a "Pro" device for professional video editing.

No they don’t. Claiming something is capable of video editing and marketing it as a video editor are two very different things.

The 3 Macs introduced this week are Apple's lowest-end devices, 2 of which still have ‘big brother’ Intel versions for sale today.

If you’re truly ‘irked’ that the lowest-end, lowest power, first release devices aren’t comparable in performance to the highest end desktop chips, then you’re putting the wrong stuff in your coffee.


Apple markets this as a "Pro" device for professional video editing

No, they don't. Because Apple keeps raising the ceiling on low-end devices like the 13-inch MacBook Pro, in many aspects, it's more performant than a high-end laptop or desktop Mac from just a few years ago.

Please read the best article so far that explains what "Pro" means for Apple—it just means nicer; it doesn't mean for professionals. https://daringfireball.net/2020/11/one_more_thing_the_m1_mac...:

Wait, wait, wait, you might be saying, the MacBook Pro is pro. But as I’ve written numerous times, pro, in Apple’s product-naming parlance, doesn’t always stand for professional. Much of the time it just means better or nicer. The new M1 13-inch MacBook Pro is pro only in the way, say, AirPods Pro are. This has been true for Apple’s entry-level 13-inch MacBook Pros — the models with only two USB ports — ever since the famed MacBook “Escape” was suggested as a stand-in for the then-still-missing retina MacBook Air four years ago.


You can do toe to toe best of the best speeds and feeds...

But I think the broader strategic outlook is: yes, the M1 loses on a few benchmarks, but the fact that it gets into the ballpark of monster rigs costing multiples of its price and power - is this not the whole picture of the Clayton Christensen disruption curve?

The other point is - Apple's Logic and Final Cut software are probably optimized for the M1, and they can likely achieve much of the capabilities of the monster AMD rig for a fraction of the cost/power budget.


That is an absurd comparison. AnandTech clearly mentioned that the M1 was on par in single core, not in multi core.


I am not even sure if you are serious or if you are trolling.

Not only did Apple not compare a laptop CPU against a workstation CPU, AnandTech didn't pretend it was competitive with a desktop workstation either.


> the AMD 2990WX comes in at

Oh, neat. What kind of battery life do you get on your AMD 2990WX ultralight laptop?


They’ll be 2-4x faster in some multicore tasks. CPU benchmarks specifically break out single core performance as a separate metric, because as of 2020 a lot of everyday work is single core bound (stuff like 3D graphic design, video editing or compiling large codebases not considered “everyday work”).

Not to mention that even in multicore tasks, you don’t usually scale perfectly linearly due to overhead. And also, the biggest Ryzen processors are usually in desktops, and Apple Silicon hasn’t entered that market yet.


For most everyday work - Raspberry Pi is fast enough, so it's not even an argument. Raspberry Pi 8GB is 10x cheaper? There are mini desktops starting at $250 that will do everyday work.

If you throw in "everyday work" - then we have passed the need for new chips altogether.


> For most everyday work - Raspberry Pi is fast enough

That really stretches the meaning of 'everyday work' quite a lot. The pi is dog slow, even compared to an Intel i9 ;)


That's a bit of an overstatement. Booting from an SSD instead of an SD card gives an enormous uplift in performance. I have yet to hear of a Pi 4 that couldn't overclock to 2GHz, which is a pure uplift of 25%. Moving to 64-bit Pi OS gives another double-digit jump in performance too. Not record-breaking, but not unusable either.


Passively cooled first generation macbook air chip isn't quite as fast as an absolute monster grade PC Ryzen chip on its 3rd generation. Color me shocked.

I think you're just trying your hardest to convince yourself that these chips aren't competitive.


The M1 is only "first generation" because they called it M1 instead of A15. :)


It’s really very similar to the A14 chips.


IIRC the biggest Zen3 mobile CPUs are 8-core. So they'll have at most 2x cores. And that's ignoring the low-power cores on the mac which probably still count for half a core each.

AMD is likely to be faster in multicore overall, but not by much it seems.


There are no announced or released Zen 3 mobile CPUs at this time. You are correct in that the Zen 2 mobile CPUs currently top out at 8 cores, and up to about 54W TDP - the top CPU is the "45W" Ryzen 9 4900H which can be configured up to about 54W by the OEM. We might see Zen 3 mobile early in 2021.


It bodes really well for future chips with higher power budgets. The Pro seems a bit underwhelming for what it could be though.


That new MacBook Pro replaces the low end of the Pro line which had a slower CPU and only 2 ports.

I would expect that, when Apple brings out their next iteration of chips, they would target the higher end of the Pro line with more cores and ports along with higher RAM capacities.


I'm guessing they also want devs of tools like docker to finish porting their software before they switch the rest of the macbooks over.


On performance, ya I agree. Although, they basically doubled the battery life over the previous generation so that alone might be worthwhile for some users.

I think we'll see an additional higher end Macbook Pro 13" when they start to release Apple Silicon models with discrete GPUs.


They only need to add 2 ports to the Pro to differentiate it from the Air.


Single thread performance still matters a lot for personal computer use. It’s not everything, normal people do benefit from some degree of parallelization, but there’s a reason all of the major PC chip designs continue to push single thread performance even as that becomes more difficult. Most end users see more benefit from those improvements than from more cores.


M1 has multiple cores. 2x-4x multiple cores does not necessarily mean 2x-4x faster.


The M1 has a big.LITTLE design with only half of the cores being performance-oriented, so if there is a gap, it would almost always be in Ryzen's favour.


Assuming complete core saturation, yes.

That’s not how CPUs actually operate though, outside of some very narrow tasks. If you actually regularly max out your CPU, you already know that and wouldn’t touch a Mac mini no matter what chip is in it.


Most real world usages don’t max out all cores all the time.


Chrome and 20-50 tabs, and my Intel MacBook can be used as a blow dryer. Assuming Chrome's power needs don't change, it seems the only way to keep an M1 from overheating is going to be throttling down - slowing everything down. Curious how M1 machines feel during day-to-day usage.


The reviews I saw said that using Chrome gets you good battery life, but if you want great battery life you need to use something like Safari.

I switched to Safari a few years ago, and I couldn't be happier. Chrome's performance and battery life are atrocious. I only use Chrome when I need something specific from it.


I saw that comment in a couple different places. Presumably Chrome is running through Rosetta 2, whereas Safari is native to the M1. I imagine once Chrome is available natively, performance will be somewhat better, though probably still not as good as Safari.


On the new machines, yes, but none of the people you read about before today had AS machines; they were all comparing Chrome and Safari on Intel, so it's unlikely that status quo will change once Chrome is native on AS.


Actually, I think it will. The M1 chip accounts for a much smaller proportion of the laptop's power draw (compared to, say, the LCD, SSD, etc). If the new MacBook Air idles at 6W and runs Chrome at 10W with no fan, people are barely gonna notice. That’s a big difference compared to an Intel machine running Chrome at 35+ watts.


I was referring to multiple reviews of new Apple hardware, just to clarify.


IIRC Chrome is a battery hog / memory hog on all platforms.

Am I wrong in this regard?


Afaik it's a memory hog, but it doesn't really use more or less battery than other browsers.


so it uses more memory but not more battery?


That's...not a contradiction?


Google today published Chrome 87 with (supposedly) big memory and CPU usage improvements: https://9to5google.com/2020/11/17/chrome-87-mac-windows-stab...

Also they started rolling out the first M1 optimized version of Chrome (but has been pulled since due to stability issues): https://9to5google.com/2020/11/17/chrome-mac-apple-silicon/


Chrome is just awful on a mac. I am not sure why anyone uses it. FF is much nicer to use.


I’ve been using Safari as my daily driver for some time and it’s quite nice to use. Don’t be afraid to give it another chance.


Does Safari have extensions like Ad Blocker and does it have good developer tools?


1. Yes, it does. I use AdBlock Pro.

2. Yes, it does. I've been using Safari as my primary browser as a Rails developer for at least the past decade and have always found the developer tools at least adequate. I don't use the developer tools on other browsers heavily, so I don't know if I might be missing something.


[Edit] I'm wrong about this: "Adblock Pro no longer exists for Safari (in the form of an "official" extension)." It still exists, as "AdBlock Pro for Safari" developed by Crypto, Inc., but is not listed on Apple's extension site for some strange reason: https://apps.apple.com/us/story/id1377753262

The listed ad blocker is "AdBlock for Safari", developed by BETAFISH INC, which offers in-app purchases including a "Gold Upgrade" that "unlocks" some basic features gorhill's uBlock Origin already has on every other browser.

https://help.getadblock.com/support/solutions/articles/60002...

Not switching until there are some better options for this.


I have no trust in an ad blocker extension (which has access to every site you visit) published by an entity in the cryptocurrency domain. An ad blocker is the best way to hide malware that steals money.


I used to run Safari on my mac and it was the best thing in the world:

- It integrated perfectly with the OS

- It saved battery like heeeeell

- It integrated natively with Airpods and media keys

- It clearly had worse performance than Chrome and a couple of incompatibilities, but it was perfectly acceptable

- I could run most of my extensions, namely uBlock Origin, HTTPS Everywhere and Reddit Enhancement Suite

- The native PiP (before it was on any other browser) was AMAZING

I had been a diehard Chrome user since it came out (with the comic book!) on Windows, Linux and macOS. I got fed up with how slow it was becoming and how it was running my fans all the time.

Unfortunately, two things happened that made me quit Safari:

- I found some weird bug wherein whenever I typed an address in the address bar it would always slow down to a crawl

- Apple deprecated and abandoned old extensions. So I lost most of my very valuable extensions, with emphasis on uBlock Origin and Reddit Enhancement Suite. I could live with a different adblocker (I saw adguard at the time), but I could not live without RES. No way.

So I left Safari and have since moved to Firefox. It seems almost as fast as Chrome, has nice integrations and features, but it's no Safari. It still drains my battery and has issues. Firefox has since progressively added PiP (even if it's not native) and support for media keys, which was a godsend, so that's nice.

I'd like to get back to Safari. It would be amazing. Do you know if there is any way for me to get what I used to have back? uBlock Origin (or something with compatible filter lists and custom rules) and Reddit Enhancement Suite?


Yes, PM me for an invite (it's not Safari, but it is a native, WebKit-based browser that runs uBlock and other WebExtensions).


Try AdGuard; it also has an iOS version which offers almost the same experience in Safari.


Safari is migrating to a new system of extensions that will make it much easier to port from Chrome. However, I understand it still requires Xcode (which non-Mac folks can't run) and a developer license (which not everyone wants to pay for). I hope to bring my Chrome extension to Safari, but honestly it's not a priority because most people who install extensions are not running Safari (when you consider that most people are not on Mac, and a large chunk of folks on desktop Safari are there because it's the default — and therefore would not likely install extensions).


I use AdGuard for safari (in the Mac app store), which works reasonably well.

It has good standard developer tools, but not the advanced stuff like Redux replay and flexbox inspectors.


I particularly like Chrome profiles. I have a few profiles with their own bookmarks/histories/open tabs/etc. For example, one of my profiles is "Shopping". Another is "Work" and yet another is "Social Media".

Context switching profiles at a macro level - as opposed to intermingling work/shopping/social - is beneficial to me.

When I switch over to "Shopping", I have my tabs on whatever purchase I'm researching open. I can drop the whole project for a few weeks and resume it later right where I left off. None of it can bleed over into my "Work" profile. I like the separation. Helps keep my head clear.


Firefox has something like this called containers. The best example is one for Facebook, where any call to any Facebook server only works in the Facebook container. It has similar setups as well: Work, Home, Commerce, etc.


I switched from Chrome to FF as my daily driver, and I miss being able to have multiple simultaneous instances with different proxy configs (via a --proxy-server="socks4://localhost:####" command line flag).

FF, as far as I know, does not have a way to do this as easily; you have to spin up different profiles and click through each one to configure the proxy.

I still have Chromium around primarily for this reason.
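Roughly what I mean, as a sketch (Chromium's --user-data-dir and --proxy-server flags vs. separate Firefox profiles created beforehand; the profile names, paths and ports here are just placeholders):

    # Chromium: one profile dir per instance, each with its own proxy
    chromium --user-data-dir="$HOME/.chromium-work" --proxy-server="socks5://localhost:1080" &
    chromium --user-data-dir="$HOME/.chromium-personal" --proxy-server="socks5://localhost:1081" &

    # Firefox: profiles run side by side with -P and -no-remote, but the proxy
    # still has to be configured once inside each profile's settings
    firefox -P work -no-remote &
    firefox -P personal -no-remote &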


OTOH, I was not able to stop Chromium from leaking DNS requests when using SOCKS5. Only in FF could I make that work.


The FoxyProxy extension will help you there. You can also configure automatic proxy switching based on many conditions.


Not quite the same. I want to set up and tear down entire macro groups of windows and tabs while keeping others active.

Opening my 'Shopping' profile brings up windows and tabs from where I left off. Same with "Social". When I don't want distractions, I just close those profiles. No notifications, no updates, etc. I like the separation.


Simple Tab Groups [0] + Multi-Account Containers [1] are my workflow for that exact case. Simple Tab Groups hides the tabs based on the group you're in and the Multi Account Containers can keep them segmented from a contextual standpoint.

I can't stand Chrome either, so I've been using these two together for about a year now, I believe. Using a naked version of Chrome is jarring now that my browser, set up like this, feels like it fits how I use it.

[0] https://addons.mozilla.org/en-US/firefox/addon/simple-tab-gr... [1] https://addons.mozilla.org/en-US/firefox/addon/multi-account...


Thanks I'll look into it.


I don't use Chrome, so I don't know what Chrome profiles are like. But Firefox also has profiles. Launch it with the -P option to open the profile manager and create additional profiles, besides the default one. Each profile is an entirely separate browser state: settings, tabs, cookies, storage, cache, etc. You can use them simultaneously. (This has existed for as long as I can remember... since 0.9 and probably back to Netscape?)


Firefox also has profiles, though they're not a very prominent feature and are a bit less polished as a result.

    firefox --ProfileManager


You can also type about:profiles in the address bar and launch a new profile from there.


There are many extensions which implement workspaces in FF. You can do exactly the same thing + FF containers give you separation for cookies, etc.


Yep, would love to use Safari but profiles are crucial for services you have multiple logins for (such as work and personal email).


Is that a Chrome for Mac feature? Never seen it before. Care to elaborate?


I use Profiles on Mac and Linux. Don't think Profiles work on Chromebooks - haven't really explored this.

https://www.makeuseof.com/tag/custom-chrome-browser-profiles...


Speed, mostly, though the last time I tried out Firefox seriously was over a year ago; it was noticeably slower on pages (ab)using lots of JavaScript.


Does Edge Chromium for MacOS have the same awfulness?


I have a 2017 MBP (base, no TB) and found that Chrome made my fan rev like crazy. A friend told me about Brave and I tried it out. Now my fan only kicks on when I'm doing serious work. I know some folks don't like Brave for various reasons, but I love it because my MBP is almost always silent.


It occurs to me that we can run the iOS version of Chrome on the MacBook too. And the iOS version of Chrome is a wrapper around WebKit, IIRC.


That and opening Zoom seem to push my 15" over thermal reboot.


I can't find that quote or even the words "undisputedly" or "Zen3" in the article. Was it changed or, if it wasn't, can you give me a pointer, please?


AnandTech split some articles onto multiple pages. The print view gets you the whole article on one page, so I rather prefer it: https://www.anandtech.com/print/16252/mac-mini-apple-m1-test...


Wow, I never knew about the print view. It's way more readable, and the lack of comments section makes it quite fast to load despite the very long page.

The tiny drop-down menu in the default view is very hard to discover and quite annoying to click on (many other review sites, like Phoronix, have similar annoying drop-downs).


As I remember, they charged a membership fee for being able to download the whole article as a PDF.

It seemed somewhat reasonable that an article that would be passed around the department or on to your boss would require a fee.

I can't find any mention of it on their website, though. Am I getting my websites confused or did they drop it altogether?


You're thinking of Ars Technica, I think.


Yes, you're right. Thank you. And they're still offering PDFs only to members, which I don't have a problem with.


It's insane how bad the website's UX is; first-time visitors don't see that there's more content behind the first page.


Wow, indeed! I thought the conclusion part in the article was weird. Now I see why - I was at the end of the first page!


That's how websites used to fit more ads into an article before constantly-updating Javascript ads became the trend.


Yes, but with a big huge button "NEXT PAGE" or something like that. Look at the number of comments here that didn't even notice there's more content other than the 1st page.


The article has multiple pages. You can find the conclusion here: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


When I share an article from Anandtech I always use the "print" URL which has all content on one page https://www.anandtech.com/print/16252/mac-mini-apple-m1-test...


It's on page 7


You have to jump to the article's conclusion.


A jump to conclusions, if you will.


Of course, we'd already know that if only we had a jump to conclusions mat!

It's a shame it wasn't a commercial success.


You have to use the dropdown to select the right page. As they said, it's in the conclusion of the article.


It’s not super technical, but Engadget did some benchmarking of the Air. Similarly impressive

https://www.engadget.com/apple-macbook-air-m1-review-1400313...


As for kernel panics, with iOS likely sharing most if not all of its kernel code with macOS, I'd be surprised if Apple hasn't had a macOS build for iPhone hardware since before they released the first iPhone.


iPhoneOS was basically Mac OS X.


On the air now. Launched Kerbal Space Program without a problem, loading mods now..


> At this point I'm mostly interested in the Air's performance with its missing fan.

I think the clear story here is that the Air will definitely be slower than the rest over time. This isn't a 10W SoC, clearly, so it definitely can't run at its best while being passively cooled.

How it behaves when throttled will be interesting for sure though.


It looks like it only begins to throttle after 8.5 minutes of sustained high loads — like exporting 4K video.

In a lot of day-to-day work that is much more peaks and valleys, you may never see the throttling.


Seeing reddit reports of gaming throttling more quickly than that, which would make sense since the GPU is going to sit at or near 100% pretty easily with a game and you'll still be seeing a decent CPU load.


The MacBook Air is probably the worst machine you can think of for gaming. Even without the M1 you won't have a great time. It's the perfect machine for students because it can last all day in a browser and Word.


link?



Yeah, me too. I expect that there will be another release sometime next year with an updated "package". It is cool that they put it in the old Air box, but I think I can wait for a better camera and overall package. By that time, all the benefits and issues that come with it will be clearer.

I wouldn't mind a plastic edition of the MacBook w/ M1. Aluminum, and metal overall, isn't the best for everyday use. I prefer the "warmer" feel of plastic, like a ThinkPad for example.


I suspect the aluminium frame is a key part of the passive thermal cooling. I'd be very surprised if we see a plastic version.


I do miss the plastic ones myself. My sweat dissolves macbooks and they give me a rash.


lol, clean your computer.


Performance per watt of this thing is INSANE, take a look at the battery left after compiling WebKit, compared to older Mac:

https://techcrunch.com/wp-content/uploads/2020/11/WebKit-Com...

https://techcrunch.com/2020/11/17/yeah-apples-m1-macbook-pro...

IMO the all day battery will be THE killer feature of the new laptops.


The takeaway quote I've been sending people from the TC article is:

> And, most impressively, the M1 MacBook Pro used [just] 17% of the battery to output an 81GB 8k render. The 13” MacBook Pro could not even finish this render on one battery charge.


It's insane, I've been playing games where on the previous Macs it would go full jet engine, but now it doesn't even heat up.


These gains are just from improvements in processors. Now combine that with parallel leaps in battery tech... It's also good that battery tech hasn't jumped too far ahead already - the constraint has helped the development of these mobile chips. The display panel is still the biggest energy draw on my laptop. There's a lot of room for improvement there too. A 2-day battery in 5 years isn't that crazy.


Apple is the kind of company that will choose to slim down the laptop instead of making it 2 day battery life.


I think that's the right call, honestly.

I need to recharge my own wetware at least once a day. So there is a nearly guaranteed several hour idle period where I'm not using any technology and where laptops and phones can be recharging too.

I don't see much end user value in not taking advantage of that.

It's like if you parked your car literally at a gas station every night anyway. Would you really care about a fuel tank that could let you drive for more than 24 hours?


I would want a 2, 3, even 7 day battery. Why not? I don't want to bother about charging every day.

But my worry is that, with more and more efficiency in computing and battery tech, Apple will instead decide to reduce battery capacity. Especially with no competition in sight, or with competitors trying to follow Apple, we may end up with smaller batteries instead of more battery life. They seem to be doing that with the iPhone. Also, an 18-hour battery life can quickly degrade to a few hours if some process is spinning the CPU continuously, which can happen knowingly or unknowingly with several apps open (Docker occasionally does that to my MBP).

With several days battery life, I can even go for short travels without even bothering about how to charge it.


The problem with multi-day battery life is you never get into a routine and you get caught out every time. I had a Pebble watch with a 10-day battery life, and almost every 10th day it would go flat on me mid-day, or I'd get a warning about low battery while I was out and then forget about it when I got home.

Now I have an Apple Watch that lasts almost but not quite 2 days, I charge it every night and it has never gone flat on me and I find it no hassle since I take it off before bed anyway so I just drop it on the charger.


Kindles suffer this problem. The battery lasts forever so you never get in the habit of charging them. Then you get caught out and it's at 0% and you can't read your book. Too much battery capacity is surprisingly harmful.


This is precisely why Kindles and similar devices should (a) use battery chemistries that are power dense so that they can charge quickly, and (b) come with a wireless charging mat that you can set up on an end table wherever you like to read (or where you put all your stuff when you walk in the door).


The only device that I don't have this issue with is my logitech mouse which gets 70 days on a single charge. It shows a red light on the side when its at about 2 days charge left and I usually remember to plug it in when I leave for lunch.


If your mouse is compatible with the Logitech powerplay charging mousepad, you never need to plug it in again.


External battery for the cases when you need it, ultralight for when you don't. The ancient 'carpet the world vs. wear slippers' debate.


Especially now that thanks to USB-C, it's easier than ever to augment laptop batteries. For $60 you can grab a 100Wh RavPower battery pack that can double the lifespan of a MacBook.


Aside from having to be more attentive with charging, another consequence of a smaller battery is that you put more (and deeper) cycles on it for a given amount of total usage. All else being equal, a larger battery may be able to live longer.

Personally I would like a larger battery because that means I can leave my charger at home, or even use a weaker USB-C PD charger on the go just to reduce the drain rate.

The problem with the fuel tank analogy is that liquid gasoline is really heavy and your energy efficiency would drop dramatically, due to lugging around an extra 80 gallons of gasoline. That's not necessarily the case with a larger battery. You still have to carry the additional battery material, but the marginal costs IMO are nowhere near as high.


I really only charge my AirPods Pro every week or so. I much prefer having a bulkier case with a battery I can forget about to a smaller case with a more frequent need for charging.

It’s actually revolutionary for how I use a computing product. If I had to charge them every day, I assume they’d be dead half the time, because it’s annoying to have yet another thing to plug in.


Yeah, I do this with my Tesla, and it’s a game changer: when I need to drive, my “gas tank” is always full and will take me anywhere I need to go for 90%+ of the driving I do regularly.


I think you're missing a world of possibility: your laptop having plenty of power to spare to charge all the peripherals.


I fully concur having the same annoying wetware limitations


I'm not sure I or really anyone actually wants a 2 day battery life. Like, we can do that on smartphones right now but users have signaled that the 1 day device is fine for them, notably because of the human gap.

You know the gap.

If you charge your phone every night it becomes a habit tied to your daily routine.

If you were to charge your phone every other night, you might lose track of what day you are on, not charge it, and then the perceived battery life experience is worse. This is why smartwatches with 3-4 days of battery have not prevailed over those with one heavy day of battery. It's annoying to keep track of what day you're on, so you might just charge it every night; and if you do, the platform has traded off so much power that the experience is worse.

Plus, then you have to carry 2 days' worth of battery, or have half the power envelope of a laptop designed for one day. The concept all sounds great, but the reality of how people use things has homed in on the fact that these devices need to fit into habits and use cases that make sense.


Why are you assuming you need to charge a 2-day device every other day? You charge it every night, and in exchange you make it through heavy use days, late nights, and the times you forget to charge it. I had a 2-day phone and downgraded to a 1-day phone and my phone now dies on me much more often, including in each of those scenarios, and looking at the battery level and charging have become a bigger part of my life.


I think it's implied in "two day battery life".

If that's not what you actually want then just call it 'heavy use all day battery life' or something.


> I'm not sure I or really anyone actually wants a 2 day battery life.

I do, because that means it could probably do 8 hours at high load.


My Garmin lasts about a week if I don't use GPS and it's by far the best feature.


...maybe?

The Air form factor is already pushing the limit of what you can do with aluminium and still have high confidence it won't warp when you shove it into your bag, or have it fall over when you open the screen past vertical.

I could see them slimming down the battery to get more components in, maybe, rather than two day battery life.

Which would probably be the right call. There just aren't enough circumstances where not plugging in your laptop while sleeping is necessary to justify it.

Personally I'd like them to make a model with a cellular modem, with all-day battery life even reasonably far from a tower. That would be fun.


I'd expect carbon fiber reinforcement to add stiffness soon on some of these larger, ultra-slim devices. That's what we use for sports equipment, and CF+Al bonding is a well solved problem in manufacture.


Apple just increased battery life on the MacBook Pro from ~12 hours to 20 hours and kept the form factor the same. Likewise the MacBook Air. Feedback from people using them suggests gains are even bigger for people who use them on battery under load.


But they decreased the battery size on the iPhone 12 due to power efficiency gains, so battery life is approximately the same, unfortunately.


Apple is the kind of company that will choose to slim down the laptop instead of making it 2 day battery life.

Hopefully the Era of Ive is over. There are promising signs around, but I'm not ready to believe it yet.


Ive doesn't unilaterally decide how thin the laptop is.

It's a joint decision made by Hardware Engineering, Product Management, Design, Operations etc.

And frankly everyone wants thin laptops just with top tier performance which it looks like we will get.


I believe some of Apple's laptops have gotten thicker since Ive left.

https://www.cnbc.com/2019/11/13/apple-is-finally-willing-to-...


It gets 18 hours battery life. For most people, that is 2 day battery life.


Do you think that's still true in 2020? Seems like thinness hasn't been a major consideration for a couple of years now.


They're doing it again with the new iPhone. All of this year's iPhones have a smaller battery.


And they all added hardware features to fill that space. Look at the iFixit teardowns and you'll see it wasn't to make the phones thinner. It was to make room for wireless charging, LIDAR, and other features.


As much as I dislike them doing it while disregarding other aspects, we need/want slimmer. A notebook PC should be as thin as a real notebook or maybe even thinner. And it should be sturdy enough; after all, a 5-6 mm plate of metal is quite robust.


Why? I could understand making tablets thinner, since they can be used like a paper notebook, but laptops don't work anything like paper.


Especially with 2 in 1 laptops, the line is very blurry.


Apple is the kind of company that will choose to slim down the laptop and improve the performance AND increase the battery life.

Like they just fucking did with the M1 MacBooks.


Yeah, I’m not saying it’s a bad thing. They know the battery range they need and keep it there, to make room for other features.


Apple is slowly moving to mini led displays and probably micro led displays after that. That should also improve energy use.


Currently, mini-LED backlighting is power-hungry compared to simple edge LED; it's used for high-contrast HDR. MicroLED won't be available for laptops in the near future.


Imagine the day we have high-refresh, color e-ink displays to pair with high performance ARM chips.


Will it? Do people really spend that much time away from an outlet while on their laptop?


My school had a policy of no chargers at school due to safety regulations (fires/tripping hazards), so the MacBook Air made an excellent choice.


Wow! That would drive me batty! It also seems unfair to students who can’t afford the latest-and-greatest laptops and are stuck with laptops with lesser battery life (unless the laptops are all school provided).


This was a private school and the macbooks were school supplied (from your fees of course)


No chargers at school, but you're expected to use a computer to learn...

That's kind of weird.


Well they provided us with macbook airs which easily last an entire day of usage unplugged.


An all-day battery and a very light laptop will change people's behavior. Previously they might not have brought the laptop out as much because of power constraints.


Yes. Even at home it's so much nicer to be able take the laptop anywhere without worrying about wires.


This metric is hilarious and perfect.


Wow it must suck to be Intel right now.

It wasn't so long ago that the trope was while others had better multi-core performance "...Intel still holds the lead for single core performance"

Now not only do AMD have a better product, but Apple also offer equal or better performance than the best that Intel can offer.

I wonder what is next for Intel now their £1000+ CPUs are firmly in third place. Looking forward to some new innovation and competitive (inc pricing!) products from them.


Don't forget to add that Apple is now doing this on their version of a "budget" laptop that has no active cooling, that gets 18-20 hours of battery life, that runs emulated x86 code with almost no performance hit and is a 1st gen product.

I don't think any of these details can be overstated. Even AMD's 1st-gen Ryzen kind of sucked, and look where that is now.


> Don't forget to add that Apple is now doing this on their version of a "budget" laptop that has no active cooling

The Anandtech tests were on an actively-cooled Mac Mini and the power draw numbers they were observing were far outside of what can be passively cooled in a laptop. You'd need to wait for Air-specific results before drawing too many conclusions on how it performs.


AnandTech isn't the only one providing benchmarks; they are rolling in from all over the place now. People are running 15-minute Final Cut Pro jobs and the fan isn't kicking in on the MacBook Pro.


Any word on whether Final Cut and Logic X have been recompiled for ARM?


FCP 10.5 dropped on 11/2:

• Improved performance and efficiency on Mac computers with Apple silicon

• Accelerated machine learning analysis for Smart Conform using the Apple Neural Engine on Mac computers with Apple silicon

Discussion: https://forums.macrumors.com/threads/apple-updates-final-cut...


Thanks, my M1 MacBook Pro gets here tomorrow so we'll see what happens


They said in the keynote that Logic had major improvements under arm as well. I can't remember if it's actually shipping yet.


I installed updates a few days ago and the release notes say "support for Apple Silicon"...


If they are Apple they are Universal apps I believe?


That's the same question being asked - have they been recompiled as universal apps?


Based on the benchmarks, and the fact that for Apple it's only a recompile since they have been planning this, I'm guessing yes. But unconfirmed.


> "budget" laptop

At this price it's more expensive than 80% of best-selling laptops, so not quite budget. If you compare it in price to Dell, for example, it only competes with their XPS line, which is their high-end one.

Apple only does high-end products, which is fine but doesn't make that model cheap.


> Apple only does high-end products

I know this gets repeated often, but this is simply not true. Apple _does_ make high-end products, and they market themselves as a high-end brand, but Apple has always filled as many market segments as they can. There are plenty of examples that prove this statement wrong: iPod Shuffle, iPhone SE, the $250 iPad. They never do deep discounts on their products though, so when they age or go stale they are far overpriced; and they do _not_ make value or budget models.


That's why I put the word budget in quotes. It's the cheapest laptop they make, even though it isn't all that cheap.


To look at these CPUs a different way, it’s fairly competitive with Ryzen processors that cost $600-700 alone, except that will buy the whole Mac Mini.


This is really the 12th generation of Apple's own chips, though - and the third of this particular design, if I recall correctly.


When I say first gen product, I mean the whole product, not just the chip used. It would be a very different situation if we were talking about an upgraded iPad with a new chip. This is a platform defining moment.


Are you claiming that A1 is in the same category as M1?

If so... then Intel's latest chip generations should be traced back to 8088 in 1979.


Intel CPUs _literally_ boot pretending to be 8088s [0]

[0] https://en.m.wikipedia.org/wiki/Real_mode


That's exactly how people describe Intel's lineup.

And the architectural similarities between their first 14nm chip and their latest 10nm chips are at least as strong as those between the M1 and the A12Z, maybe even their first 64-bit chip.


There was never an A1; the first Apple-designed SoC was the A4 that shipped in iPad.


At some point one wonders if Intel will just cede the desktop and enthusiast markets to AMD and/or Apple and just focus on server and high-end computing? As an IBMer this feels familiar for some reason...


There's no reason why servers will always stick with Intel.

Amazon already has their own Graviton ARM chips - And that's EC2, cloud native workflows might have already migrated.


Not to mention HPC, with the top three supercomputers being Power & ARM.


Intel isn't doing too well in HPC either. The top Intel system is now at #6, and the vast majority of its performance comes from the Matrix-2000 accelerators. The highest pure-Intel system is at #9.

Even with high end servers Intel seems to be losing to AMD Epyc.


AMD has been on the market for decades, practically always second in the performance tier. I'm not sure why Intel, which should ostensibly have lower unit costs, would abandon a market over a possibly temporary situation that might not even be an issue one or two design nodes from now.


What would keep AMD or Nvidia from eventually eating Intel's lunch in the server market as well?


Nothing, they'd just die a lot slower


Not a chance, the consumer market is massive. Even among PC enthusiasts, AMD is in the minority. It's not even close: https://store.steampowered.com/hwsurvey

I'd sooner expect Intel to start making their own ARM chips to compete with Apple.


Steam marketshare is slow to change because it includes a lot of people with older PCs. Look at new sales and the picture is very different: https://imgur.com/a/yEKDpd2


Thanks for the link! Didn't know steam made that kind of analysis public.

But I'd take a closer look at those numbers: in 5 months Intel has lost 2.5 points that AMD has gained. Doing some stupid, atrocious math of just extrapolating the average point gain over those 5 months (and not accounting for the fact that my PC enthusiast friends are stating that their next machine will be AMD), that puts AMD at 50% market share around November 2023. That gives Intel very little time to pivot.


Not long ago at all, like just a few weeks ago right before zen3 was out in the wild! A double whammy for sure for Intel, tough times ahead indeed. Apologists can hand wave AMD off by citing the huge lead in sales that Intel still enjoys, but that argument falls flat with Apple, a trillion dollar company. Maybe Intel will start to compete on price like AMD used to.


I think they announced a short while back that they're "looking at trying to outsource manufacturing of some high end parts", ie they've known that they were falling behind in too many areas due to their shrinkage problems so they're taking in help from the outside to not become irrelevant.

The M1 is built on "5nm"; looking at specs, Intel 10nm is ~100 MTr/mm2 vs ~173 MTr/mm2 for TSMC's 5nm Apple chips (so even if Intel's nomenclature is more conservative, they still lag by a lot in manufacturing density).


Yes, and even when Intel moves to 7nm, it's at ~202 MTr/mm2. Current TSMC 5nm sits between Intel's 10nm and 7nm density-wise.


As someone who owns both Mac and PC, I am excited on what Ryzen can offer on 5nm.

If Apple has these gains, I am sure Ryzen will have great performance leaps too.


I’m not massively familiar with CPU architectures, would you expect to see similar performance gains going to 5nm on x86 as you do on ARM?


Not strictly because of 5nm itself.

5nm will be Zen 4 which should bring 10-20% IPC uplift if AMD's current trend continues.

TSMC's N5 offers roughly 1.8x the logic transistor density of N7, which should lower power consumption significantly, though SRAM density improves by only a more modest ~1.35x (this especially affects desktop Ryzen with tons of cache compared to their laptop versions).

AMD currently makes the Zen 2/3 IO die on Global Foundries 12nm for contractual reasons. When they finally shrink that to 7 or 5nm, the power savings should be significant.

Zen 4 is expected to bring DDR5 support which will both drastically increase bandwidth and lower RAM power consumption. Likewise, it is expected to support PCIe 5 which doubles the bandwidth per lane to a little shy of 4GB/s.

All of these things together could mean a decent improvement in IPC and total performance and a very big improvement in performance per watt.

Meanwhile, I suspect we'll start seeing large "Infinity Cache" additions to their APUs that is shared between the CPU and GPU as the bus width of DDR just doesn't offer the bandwidth to keep larger GPUs from fighting the CPU for bandwidth. This should not only improve APU total performance, but fewer trips to RAM has a significant effect on power consumption (it costs more to move 2 bytes than to add them together).


Not really. AMD could make a 7nm version of the Apple core, but they instead build more cores. Much as AMD outmaneuvered Intel with smaller chiplets (more flexible in design, higher volume, more tolerant of process errors, higher yield, etc.), Apple has done similar with their design. It's better in obvious ways: larger caches, larger number of rename registers, more outstanding transactions, more memory channels, etc. Apple could make a core just as fast, maybe slightly less power efficient, if it spent less on the neural engine, image processing, or GPU.

Another big win is that Apple runs the memory at 4500 MHz, standard, without overclocking. Even Zen 3 often runs the RAM at 3200 MHz, and standard support goes up to 3800 MHz or so. You can run it higher, but then you have to decouple the memory clock from the CPU clock, which reduces performance. The LPDDR4X also supports 2 channels instead of 1. So you get as many memory channels as an AMD Threadripper, which is an expensive, hot, and low-volume chip.


This happens to every giant eventually (and to countries or civilizations). They climb to the top, and then they hold such dominant positions that they aren't forced to try. They get lazy or sloppy (and in Intel's case, I'm not suggesting the engineers were the sloppy ones... more likely strategic decisions from management and quarterly-earnings-per-share-focused execs). Eventually they are dethroned, and some never return to power.

Intel will never go away, but they definitely will become laggards for the foreseeable future. In their industry it takes years or even a decade to see the fruits of your effort.


> In their industry it takes years or even a decade to see the fruits of your effort.

So how long has Apple been working on this chip?


Roughly 10 years on this particular processor line: https://en.wikipedia.org/wiki/Apple-designed_processors#A_se... (According to Anandtech, the M1 is a rough equivalent to the A14)


Why, exactly? As long as Apple keeps their chips to themselves, Intel or AMD will have nothing to worry about.


What if Dell, HP, Lenovo and Microsoft get together with AMD or someone like Samsung and start knocking out ARM, or even RISC-V, SoC-based machines that compete with Apple? Apple doing this could go a long way toward proving that ARM is viable on the desktop/laptop. Microsoft has not succeeded in the past with an ARM-based platform, but this could change that and refocus their efforts.


Because Apple (theoretically at least) should start increasing their market share.

Don't assume that the only downside for Intel is losing Apple as a customer... That could end up being the least of their worries.


> Because Apple (theoretically at least) should start increasing their market share.

Should that be a serious goal for Apple though? Is there that much more money for them if they jump into the race-to-the-bottom budget market, where I assume much of the remaining share is? It seems there is some added value in being a luxury product.


If Apple weren't constantly trying to increase their market share, I imagine their shareholders would like to have a word with them :-)

I get what you're saying though. I don't think they should go after the budget PC market. There's still lots of room for growth at the mid to high end. There's also servers.


The goal for the investors isn't to increase market share though, it's to increase profit. I'm questioning the assumption that market share and profit are directly, and linearly, related. The average smartphone price worldwide is around $300 [1]. I don't have access to the full report at that link, but with the graphs shown, some significant portion must be below that price. My naive assumption is that market share in the top end is the most important, with a movement into the lower end eventually leading to the destruction of the perception of quality that they seem to work hard for, and operating costs that would cut into profits. \shrug\

1. https://www.statista.com/statistics/934471/smartphone-shipme...


>The goal for the investors isn't to increase market share though, it's to increase profit.

Right, but since the wholesale cost of their phones isn't likely to change much at this point, increasing marketshare is the most obvious path to increased profit. Also, with Apple increasingly focused on services revenue, getting more customers into its ecosystem makes perfect sense.

I agree they likely won't go after the low end. This site [0] claims Apple only owns 52% of the high end market so there's lots of room for growth at the mid and high end.

Anyways, since we were comparing Apple to Intel/AMD I assumed we were talking about PC's.

[0] https://www.gizchina.com/2019/12/08/apple-still-holds-first-...


> Anyways, since we were comparing Apple to Intel/AMD I assumed we were talking about PC's.

Oh jeez, that's embarrassing. I went off on a tangent without realizing it.

For the PC market, I absolutely agree!


They could gain more of the premium market. Or they could just stick a year-old A14 in a slightly lower-end $799 MacBook in search of more market. They don't necessarily need to start making $299 cheapo laptops to gain market share.


Market share x86 overall (mobile + desktop + server), AMD vs Intel:

Q3/2020: 20.2% vs. 79.8%

Q2/2020: 19.7% vs. 80.3%

Q1/2020: 17.5% vs. 82.5%

I don't know why the OEM business works that way, but it is very slow to shift, so Intel still has time. Self-built consumer PCs for gaming are already overwhelmingly AMD though.


Discounts and design lead time.

Unlike modular desktops - you can't just drop in an AMD CPU into a laptop chassis and expect everything to work.


That's the consequence of resting on your laurels and getting complacent. They relied too much on being the large incumbent, and they reap what they sow.


They reaped billions in profits. The issue is that organizations can't turn it on and off based on competition: once you are rich and lazy, the organization fills up with coasters, and before they know it they no longer have a higher gear. It remains to be seen if Intel can come back, but I doubt it under their current leadership.


Their biggest problems came from their foundry, which they definitely weren’t resting on.


1. This is roughly a 20-25W SoC. Apple could easily scale this to 50W, i.e. an M1 "X" with double of everything (likely not the Neural Processing Unit and the image signal processor, though).

2. That would give you double the performance in multi-core benchmarks, and double the graphics.

3. They will need to double the memory bandwidth as well, so it will either be quad-channel LPDDR4X or perhaps a move to LPDDR5.

4. This hypothetical chip could be coming to the 16" MacBook Pro next year.

5. It is the nature of chips and devices that we are fundamentally limited by heat dissipation. I call this TDP computing.

6. That is why in many, if not literally all, of the explanations under the graphs they note the TDP difference, so you get the correct perspective on what is being measured.

7. That means you should not expect a 10W / 25W chip to outperform a 32-core 250W chip in multi-core benchmarks. You are basically comparing Apple to oranges. And I don't know why so many comments in this thread are doing it.

8. The M1 and its SPEC scores (no longer are we relying on Geekbench) are there to showcase what Apple is capable of.

Edit: I just deleted a massive Rant specific to HN comments on Hardware.


> I dont know why there are many comments in this thread doing it.

(Points at AnandTech) But he started it! :)


Impressive results from Apple, and another well-deserved kick in the teeth for Intel after years of stagnation. The coming decade is going to finally see some interesting developments in the CPU market again.


Right, I think in the end, this is going to show just how bad monopolies (or near-monopolies) can be for innovation. These are super impressive results; I'm just hoping that the rest of Apple (software, developer relations) can turn away from the draconian future they are currently heading toward.


There's a fair risk of this turning into new monopolies, but at least it should take a while and hopefully other companies will figure out how to meaningfully compete

I guess the fact that Apple will probably never sell this stuff for non-Apple computers will allow AMD to keep competing on the PC side.

Clearly when Apple come out with a desktop class chip though, it's going to be hard for anyone else to come close.

Somehow that doesn't seem to be a problem for Android though!


The technology is mostly done by TSMC and ARM. Apple just modifies the cores and slaps them together.


What do you think this means for people who don't like Apple's business model? I don't want to be left behind on outdated hardware just because I'm a Linux user.


Apple isn't the only ARM vendor. Nuvia, Ampere and even Qualcomm will start shipping good ARM chips hopefully soon


For people too young to remember or know, the original "Benchmark Wars" were between Intel and Motorola, with NEC, Oki, and Hitachi occasionally getting a punch in there.

I worked at Intel at the time and the 80286 (x86) architecture was going head to head against the 68000 (68k) architecture. The marketing was intense with Motorola consistently using benchmarks that benefited from linear memory access and Intel using benchmarks that benefited from branching and floating point. This was when Intel made a compiler that recognized it was compiling the 'Dhrystone' benchmark and substituted custom hand assembled code for the output of the compiler.

Watching Intel and AMD compete was entertaining because Intel was competing against its own ISA. It added new instructions, AMD created a 64-bit extension, and both worked some interesting improvements into memory handling and cache handling.

Adding Apple's take on ARM to the mix is a lot of fun for me. It is like reading a new novel in the Asimov Foundation series or maybe a fourth volume in the Lord of the Rings trilogy. I am really glad they are pushing the edge of the envelope here, it is the kind of technology that made me get into computers in the first place.


Dave2D found the air to be on par with the Pro, at least for tasks that took under ~8.5 minutes. It only really throttled after that point, according to him.


Here's one data point: a WebKit compile took 25min on the Air vs 20min on the Mini/Pro. That 25min is still a bit faster than the Intel 16-inch Pro, which took 27min and waaay faster than Intel 13-inch Pro at 46min.

The crazy thing is that both M1 MacBooks still had 91% battery left after the compile, vs 61% on the 16-inch Pro and 24% on the 13-inch Intel Pro.

[1] https://techcrunch.com/2020/11/17/yeah-apples-m1-macbook-pro... -> "Compiling WebKit"


Is this a 1-1 comparison? If the ARM compile is compiling to ARM binaries then there might be less work/optimizations since it is a newer architecture. Seems like a test with two variables that changed. Would be interesting to see them both cross-compile to their respective opposite archs.


Maybe not, but A) it's close-- most of the work of compiling is not microarchitecture-level optimizations or emitting code, and B) if you're a developer, even if some of the advantage is being on an architecture that it's easier to emit code for... that's still a benefit you realize.

It's worth noting that cross-compiling is definitely harder in many ways, because you can't always evaluate constant expressions easily at compile time in the same way your runtime code will, and you have to jump through hoops.
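
One small, concrete flavor of that host/target divergence: even trivial compile-time constants are folded with the target's rules, not the build machine's. A minimal sketch (the toolchain names are just examples):

  #include <stdio.h>

  int main(void) {
      /* Built natively with x86_64-linux-gnu-gcc this prints 8/8; built with a
         32-bit cross compiler such as arm-linux-gnueabihf-gcc, the very same
         source is folded to 4/4. Anything that assumes the host's answers
         (configure-style "run a test program" checks, for instance) has to
         jump through hoops when the binary can't run on the build machine. */
      printf("sizeof(long)  = %zu\n", sizeof(long));
      printf("sizeof(void*) = %zu\n", sizeof(void *));
      return 0;
  }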


As someone who knows relatively little about this, I'm very curious why this is downvoted. It seems like a rebuttal would be enlightening.


Hm my experience was that compiling C on arm was always super fast compared to x86, because the latter had much more work to do.


This doesn't align with my experience. Clang is about the same, but GCC often seems much slower emitting cross-ARM code.

  jar% time x86_64-linux-gnu-gcc --std=c99 -O3 -c insgps14state.c -I inc -I ../../shared/api
  x86_64-linux-gnu-gcc --std=c99 -O3 -c insgps14state.c -I inc -I   0.97s user 0.02s system 99% cpu 0.992 total
  jar% time x86_64-linux-gnu-gcc --std=c99 -O3 -c insgps14state.c -I inc -I ../../shared/api
  x86_64-linux-gnu-gcc --std=c99 -O3 -c insgps14state.c -I inc -I   0.93s user 0.03s system 99% cpu 0.965 total
  jar% time x86_64-linux-gnu-gcc --std=c99 -O3 -c insgps14state.c -I inc -I ../../shared/api
  x86_64-linux-gnu-gcc --std=c99 -O3 -c insgps14state.c -I inc -I   0.94s user 0.01s system 99% cpu 0.947 total
  jar% time x86_64-linux-gnu-gcc --std=c99 -O3 -c insgps14state.c -I inc -I ../../shared/api
  x86_64-linux-gnu-gcc --std=c99 -O3 -c insgps14state.c -I inc -I   0.92s user 0.04s system 99% cpu 0.955 total

  jar% time arm-linux-gnueabihf-gcc --std=c99 -O3 -c insgps14state.c -I inc -I ../../shared/api
  arm-linux-gnueabihf-gcc --std=c99 -O3 -c insgps14state.c -I inc -I   1.43s user 0.03s system 99% cpu 1.458 total
  jar% time arm-linux-gnueabihf-gcc --std=c99 -O3 -c insgps14state.c -I inc -I ../../shared/api
  arm-linux-gnueabihf-gcc --std=c99 -O3 -c insgps14state.c -I inc -I   1.46s user 0.03s system 99% cpu 1.486 total
  jar% time arm-linux-gnueabihf-gcc --std=c99 -O3 -c insgps14state.c -I inc -I ../../shared/api
  arm-linux-gnueabihf-gcc --std=c99 -O3 -c insgps14state.c -I inc -I   1.55s user 0.04s system 99% cpu 1.587 total
  jar% time arm-linux-gnueabihf-gcc --std=c99 -O3 -c insgps14state.c -I inc -I ../../shared/api
  arm-linux-gnueabihf-gcc --std=c99 -O3 -c insgps14state.c -I inc -I   1.44s user 0.03s system 99% cpu 1.471 total


That’s interesting. I was not cross compiling so maybe the arm system I was using was just faster.


So cross compile for RISC-V, POWER, or something else would be fair?


Apple has been optimizing the compiler for a decade for iOS.


If everything else is the same, that seems like a solid reason to prefer the ARM architecture even setting aside 1:1 comparisons. Isn't faster compilation and execution the whole point of a faster processor?


The assertion is that compilation might be faster since there are fewer optimizations, and therefore runtime would be slower.


This is insane perf/watt. x86 backwards compatibility may have gotten us to where we are, but it's certainly holding things back. ARM is looking great, and maybe it's time for x86 to die.


Honest question. Does the ISA, as a language, really matter? Or is it more a by product of who owns the ISA, eg intel sucks, arm is more liberally licensed.

I used to work at Intel, and no one I knew there thought the ISA mattered at all. That's just a few people though, so I'm curious if people think there's something better or worse about the different ISAs as a technology in their own right, or if it's more about the business interests behind them that matters.


It really doesn't. AMD has essentially the same perf/watt coming in a few months. ISA doesn't change anything nowadays because it all gets decoded into a per-CPU specific actual instruction set anyways.


Exactly right. With today's transistor budgets, the x86 ISA decoder/translator is just noise.

This is not the difference between x86 and ARM -- it's the difference between Intel's team and Apple's (also AMD's). You don't see Qualcomm being competitive even though they also use ARM.


This is not correct, actually. A simpler ISA often requires bigger instruction caches but consumes less energy because of simpler decoding logic. VLIW can theoretically be super efficient because it discards the decoding stage altogether.


In theory, yes. This is the case when you have small cores, which is why a lot of GPUs used VLIW.

But in practice the whole decoder stage is basically a rounding error because cores got so big.


What does AMD have coming in a few months?


Zen 3 on laptops. So instead of Zen 2 on 7nm, laptops should get Zen 3 on 5nm, which means a ~10% uArch clock increase, a 15+% IPC increase, and a die shrink.

Basically, laptop chips that should be around 35% faster and use less power.


It's time for Intel and x86 to die.

But I would also be a little wary, because ARM systems are way more locked down than x86 systems today.


Why does Intel need to die? Sure they're not exactly the company they used to be, but would it be enough for them to just move away from x86? I'm just thinking I don't want just one or two or three companies doing procs.


Intel stagnated and at the same time started implementing some rather anti-consumer practices. This allowed AMD to take the performance lead off them with their latest generation of products. It’s fantastic that the market for processors is so competitive. I’ve grown to not like Intel very much recently, but I’m glad they’re here. They’ll keep the pressure on for further innovation, so AMD will either need to keep up or be overtaken again. Either of which is a good outcome for consumers.


They don't need to die, but if they don't begin to compete they simply will die.


Resource allocation.

Intel dying would free up resources for development by other companies.


Absolutely not. We need more competition, because the #1 reason we got to this situation is monoculture and a single platform (x86). We need Apple to succeed in creating an alternative ARM-based desktop/laptop platform, and for more competition we could add MIPS64 from China to the mix. I am really hoping that by 2025 we are going to have 3 major platforms available for end customers, so that there is real competition.


And where are those chips going to be made? The issue with Intel's dominance is its complete dominance on the supply side as well.

You fail to realize that this isn't like 3D printing or other low-volume manufacturing. You can't just set up a 100nm Si lithography lab in your spare room and churn out RISC-V chips.

In 5 years, realistically, we will have a few high-performance (non-mobile) ARM chips manufactured at economic scale. Any other type of disruption would require Intel and AMD to fail and relinquish the supply-side capacity... or China investing billions into new chipmaking facilities now (it takes a few years to build that capacity).


China already has 14nm online, and should have 7nm in a year or two, so that means that we will probably see some real RISC-V chips from there soon, if sanctions continue.

So I think that we will have a four way competition between Intel, AMD, Apple, and Chinese RISC-V chips.

That being said, I don't see x86 dying, I think AMD and eventually Intel when they wake up will be competitive.


When you say they "should have 7nm in a year or two," are you just banking on them copying or stealing a European-made EUV machine?

China cannot be competitive at the razor's edge if its semiconductor companies depend on promptly copying/stealing technology that European, Taiwanese, and American companies bring to market.


7nm doesn't require EUV. Intel's 10nm, which is roughly equivalent to TSMC's 7nm, is done without EUV, although it's not that great.

SMIC has already produced some 7nm chips without EUV.

As for EUV for the further future, there has been quite a bit of research in that domain for many years in pre-emption of this, and while I think they will be a node or a node and a half behind for a while, they will almost certainly have one ready eventually. Of course, that will be accelerated by stealing data on EUV machines, or maybe buying a used EUV machine from someone and reverse-engineering it.


I don't see x86 dying either; I think it will be dominant in the desktop/laptop segment for a long time. I am not sure why Loongson uses MIPS64 over RISC-V. Is RISC-V generally available and ready for prime time?


I think Loongson still uses MIPS64 because of institutional knowledge. It's more so Alibaba and HiSilicon that I think are promising, and they both seem to be getting on the RISC-V train.


That is a great question. I am not familiar with how much the production of these CPUs depends on ASML, TSMC, etc. I think China is kind of forced to have its own supply chain after the Obama-era ban on Intel chips in Chinese supercomputers.

https://www.theregister.com/2015/04/10/us_intel_china_ban/


How much innovation actually comes from China, versus just being stolen by China?

The lithography companies actually have to talk about the measures they take to stop China from stealing their IP on their earnings calls.

China is a manufacturing hub, but its (often government backed) chip companies run low-margin businesses that don't make enough money to invest heavily in R&D. Go look at Apple or Qualcomm's gross margins and compare them to Huawei or Xiaomi.


Isn't having a whole bunch of different processor architectures at the same time kind of bad for end-users?


This really depends. Once-upon-a-time, at least in the UNIX (tm) world, there were a plethora of ISAs, and this was the environment where ideas like Java really made sense. Write once, run anywhere.

Most OSes are still pretty well situated to handle this. Java remains, and is easily cross platform. I can run Java-based proprietary games like Minecraft on my POWER9 desktop, despite no-one involved probably ever considering ppc64le a valid target.

The CLR on Windows is also pretty easily cross-platform, although it won't help legacy x86 PE executables. Apple has solved this for ages on the tooling side, encouraging production of "fat" binaries with many arches since OS X was NeXT, and your .app packages needed to run on x86 + m68k + SPARC + PA-RISC.
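
For a sense of how little ceremony a fat binary needs today, here's a minimal sketch, assuming Apple's current clang and lipo behave as documented (the file names are made up): one source file compiled into a single "universal" executable carrying both an x86_64 and an arm64 slice, with Rosetta only involved when no native slice exists.

  /* hello.c -- build a universal binary with:
   *   clang -arch x86_64 -arch arm64 -o hello hello.c
   *   lipo -archs hello     (should report: x86_64 arm64)
   */
  #include <stdio.h>

  int main(void) {
      printf("hello from whichever slice the loader picked\n");
      return 0;
  }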

Emulators like Rosetta (and qemu's usermode emulation) can fill the gap of legacy executables, while these other technologies can make the end-user experience good. Of course, that's only if a) someone writes your platform's equivalent of Rosetta, and b) developers write crossplatform apps.

So, the answer depends on how cynical or optimistic you are :-)


The experience in Linux distros is that extra arches surface bugs that other arches paper over, leading to higher-quality software. For example, unaligned memory access is merely slower on some arches but causes crashes on others.
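
As a minimal sketch of that particular class of bug (illustrative only): the cast below is undefined behaviour in C, and while x86 shrugs it off, stricter architectures can raise SIGBUS or quietly take a slow trap-and-fix-up path.

  #include <inttypes.h>
  #include <stdio.h>
  #include <string.h>

  int main(void) {
      unsigned char buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};

      /* Misaligned dereference: tolerated on x86, not guaranteed anywhere. */
      uint32_t risky = *(uint32_t *)(buf + 1);

      /* Portable version: memcpy lets the compiler pick a legal load sequence. */
      uint32_t safe;
      memcpy(&safe, buf + 1, sizeof safe);

      printf("0x%" PRIx32 " 0x%" PRIx32 "\n", risky, safe);
      return 0;
  }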


It's time for tick-tock to die and Moore's Law to stop being the guiding light of Intel management.

Instead, they should set up two groups: one to generate new architectures for desktop and server, and another to take the best features of those architectures and make them thermally efficient for use in laptops. The development of these two products should be unconstrained by time, because as we have seen, impossible deadlines delay the possible.

In the past 10 years, most of the chips that amaze me have simply done what was already possible, just with enough thermal efficiency that they can be placed in mobile devices.

Intel dying would be horrible for the world. They have so much institutional knowledge...


Is some of this because of those processor-level flaws/exploits, where the fixes resulted in disabling some processor features, making them slower and less efficient?

With only a completely new/different architecture getting those advances back?


And that backwards compatibility may not even be necessary, given Rosetta's performance. Sure Apple is using lots of tricks, but if Microsoft or any Linux project could get even somewhat close...


Based off of what happened with Rosetta1, I don't think devs should count on Rosetta2 being around forever


Just as testimony, it probably doesn't mean much, but with backwards compatibility you either have it or you don't, there's no middle ground.

Apple is one of the most capitalistic companies out there, they want you to buy new stuff and they'll try everything they can to force users to upgrade sooner or later

The story is this: a friend of mine is a well-respected illustrator and he has been a long-time Mac user (at least for as long as I can remember).

A few days ago he asked me for advice about a new laptop, and he asked for a PC because "the new macOS will not work with my Photoshop version".

He owns a license for Photoshop 6, paid for it, and has no need to upgrade, especially to the new subscription-based licensing

MacOS Sierra doesn't even work with Photoshop CS6

The only option he had to keep using something he owned was to switch platform (Adobe allows platform change upon request)

End of story.

Backwards compatibility has no value until you need it.

Just like an ambulance or a pacemaker.


> He owns a license for Photoshop 6

Uh, I'm guessing you mean CS6 rather than Photoshop 6, the program that came out in 2000.

In any case, Adobe's help page[1] currently reads, "As Creative Suite 6 is no longer sold or supported, platform or language exchanges are not available for it." Since they're certainly not selling or supporting versions older than CS6, it's unlikely your friend is going to be able to keep Photoshop CS6 by buying a new PC laptop. (And he sure as hell ain't gonna be able to get a copy of Photoshop 6 to run on Windows 10.)

> Apple is one of the most capitalistic companies out there, they want you to buy new stuff and they'll try everything they can to force users to upgrade sooner or later

That's not wrong, but s/Apple/Adobe and the sentiment is still true. I suppose he'll save money if he gets a cheaper-than-Apple PC laptop, but I don't think he's gonna avoid paying for Creative Cloud.

[1] https://helpx.adobe.com/x-productkb/policy-pricing/exchange-...


CS6 runs just fine on Windows 10. Of course it's not supported by Adobe; they were pretty aggressive in canceling CS6 licenses if one mistakenly accepted CC on the same account before, in order to push everybody onto their extortion scheme. But I use CS6 just fine as before on PC, not on Mac.


Right -- what I was commenting on was that if you have the Mac version of CS6, you can no longer "cross-grade" to the PC version (or vice-versa).


I'm talking about Photoshop 6

That's why I said "MacOS Sierra can't even run CS6"

Technically in Italy if you bought a license and the manufacturer won't support it anymore, you can use it on another platform even downloading an illegal copy.

As long as you have the original license.

That's the same reason why you can listen to mp3s if you own the original record: you have the right to keep a copy and the right to use it even if the manufacturer stops supporting it, because you bought it in perpetuity when you bought the product

That's why I stay away from the new licenses that give you none of those rights

And that's why backwards compatibility sometimes is what drives people's choices


> I'm talking about Photoshop 6

I'll take your word for it, but it kind of changes the picture here. Photoshop 6 was released in 2000. That version wasn't released for OS X. In fact, Photoshop 6 was still compiled for PowerPC CPUs. The thing wasn't even fully "carbonized" until version 7, so it would have had to run in the "Classic" environment -- which hasn't been supported on Macs since OS X 10.4.

Maybe you think it's unreasonable for Apple to not support a program made for an operating system they haven't shipped a new version of in 18 years, for a CPU they haven't shipped in a computer for 15 years. I'm not sure I agree.

> Technically in Italy if you bought a license and the manufacturer won't support it anymore, you can use it on another platform even downloading an illegal copy.

The legality isn't the issue, the "Photoshop 6 is literally two decades old" is the issue. :) It may be possible to run the Windows version on Windows 10, but I can almost guarantee there will be strange, quirky issues that neither Microsoft nor Adobe will be interested in helping with.


> In fact, Photoshop 6 was still compiled for PowerPC CPUs

It's the license that counts.

> Maybe you think it's unreasonable for Apple to not support a program made for an operating system

No, I don't think that.

Apple doesn't have good backward compatibility, especially compared to Windows.

That is my point.

But of course they are free to not support what they think it's not worth it.

It's not something against Apple.

> the "Photoshop 6 is literally two decades old" is the issue

True, but why is it a problem?

Does the software need to be new to work?

I think that if something still works after 20 years the authors did a great job.

We need to start thinking of software like infrastructure.

We don't rebuild a bridge after 6 months because a new material or technique has been invented.

Or at least as tools, considering them something that lasts, potentially forever.

Most of the problem we'll be facing in the future will be about digital rot, we'll deal with data that we cannot read in any way.

Apple, Adobe, and their idea of disposable working tools are helping it, not preventing it.

Of course one cannot support everything forever, Windows lost the ability to run DOS binaries years ago and virtualization can help, the problem is companies like Adobe not selling their licenses anymore.

Recently I had to work on a SOAP client after almost 15 years from the last one.

I remembered there was a good XML editor at the time, that did a good job.

One caveat is that it is Windows only and I run Linux, so I checked on WineHQ and found out that the version 2003 works perfectly.

I go to the software's web site, there is a "download older versions" button, I think "great!" and proceed to the download.

The software installs perfectly on Wine but when I launch it there is no option to start it in trial mode, you have to either use a pre-existing license or ask for a trial one.

I clicked the second, and soon after an email warned me that the product is not supported anymore and that even if I had a regular license, the servers that check the licenses are not online anymore.

So why put a download button there then?

These are the kind of things that software should avoid at any cost, in my opinion.

They've lost a customer. I would have bought an old license at the price of a new one if I could check that everything I needed to do worked as intended; instead I downloaded SoapUI, which is inferior, but free and functioning.

In this case, the solution could have been virtualization, but you have to pay for a Windows license as well, which was not necessary in the first place.

In the case of macOS virtualization is not even an option, because you can't legally run it on a VM outside of Apple HW.

For some people, that is a big problem, not because they think Apple is bad, but because they don't care who supplies the infrastructure as long as it works.

There are people installing XP on new HW to keep using their old software.

It is doable, but on macOS you can't count on it, every time they change architecture something gets lost forever.

As I said before, nobody values backward compatibility until they need it.

And when you need it and it works it's much more satisfying than when you need it and you are asked to upgrade or be on your own.


> He owns a license for Photoshop 6, paid for it, and has no need to upgrade, especially to the new subscription-based licensing

Sounds like the friend has a need to upgrade, and that upgrade is going to require new software. I don’t think this situation is Adobe or Apple’s fault, old stuff stops working at some point.


> I don’t think this situation is Adobe or Apple’s fault, old stuff stops working at some point.

Old stuff stops working due to deliberate design choices made on both Apple and Adobe's parts. Apple deliberately stripped Rosetta and 32-bit support from macOS, and Adobe is deliberately making it nearly impossible to use older versions of the CS suite on their end.

Meanwhile, I can run Photoshop 6 on Windows or WINE, and I can still run binaries that were statically compiled for Linux 20 years ago today.


You can probably run Photoshop 6 under SheepShaver. I can (and have) run DOS programs from the 1990s in DOSBox on my Mac.

I appreciate backwards compatibility, but I'm not convinced drawing lines in the sand every so often is a terrible idea. Revisiting old software is fun for nostalgic reasons and, sure, there are sometimes edge cases where you have to use something that hasn't been updated in years, but in general I'd rather be using software that exhibits at least minimal signs of being an ongoing concern.


The hardware, which is not the main tool in his craft

He draws by hand on paper and the final preparation on Photoshop is for printing

After almost 10 years he needed a new laptop (things wear out with time and he could not install more RAM) but not a new Photoshop version with a different and more costly license

The need to upgrade software is an artificial one, and it's only there because some platforms don't have good backwards compatibility

Windows does

For many people the OS doesn't make any difference, as long as they can keep using the tools they already know

There is a limit on the improvements a new software will provide if your workflow is already good as it is and you already paid for the version that works for you

I know many small businesses that still use Office 2003

They can install it on new hardware on new Windows versions, it's simply not possible to do the same on Mac

It's not better or worse; backwards compatibility is a feature, and like any other feature some people value it a lot, some don't care at all


That CS6 issue was a major faux pas and a reason why many people stay with Mojave or are forced to use VMs.


I've used Photoshop CS6 on both Sierra and High Sierra. It's ever-so-slightly more crash-prone than on older OS's, but totally usable.

It launches on Mojave as well, so I'm pretty sure it works, but I haven't personally used it for any length of time. Catalina is what killed it.

IMO, backwards compatibility in OSX/macOS was perfectly decent for a long time. Most software compiled for Intel that wasn't doing something weird continued to chug on, frequently with significant glitches but not to the point where the software was unusable. Then in Catalina Apple just gave up or something.


It's odd isn't it because if they invested a little bit in Catalina and Rosetta they could probably have had a great backwards compatibility story even in a few years time - but it's just not in the DNA I guess.


In Catalina, Apple dropped 32-bit support, and in the same process dropped a lot of frameworks that had been deprecated for ages. 64-bit software that didn't rely on deprecated frameworks continues to function


Isn't that the meaning of breaking backwards compatibility?


Photoshop 6, not CS6


The GP said:

> MacOS Sierra doesn't even work with Photoshop CS6

I'm not sure where they got that impression, but it definitely works!


I got it from Adobe's web site

> Mac OS X v10.6.8 or v10.7. Adobe Creative Suite 3, 4, 5, CS5.5, and CS6 applications support Mac OS X v10.8 or v10.9 when installed on Intel-based system

They work, maybe, they are not supported though

It means that if it doesn't work, Adobe won't provide any support


I wouldn't expect Adobe to support CS6 in 2020 on my 10.9 system either. What matters is whether the software works or not—which it does on Mojave, and on Windows 10.


you are not wrong, but Photoshop 5, released in 1998, works on Windows 10 because Microsoft made it possible

CS6 works officially from XP SP3 (2008) to the end of Windows 8 (2015)

It works unofficially on XP pre SP3 (2001) and on windows 10, almost 20 years later and it's guaranteed to work on the LTSC for another 8 years (last LTSC is from 2018)

CS6 on Mac is supported on systems that span from 2011 (OS X 10.6.8) to 2014 (when Yosemite came out)

On May 2020 Adobe updated the release notes on CS6 saying that "If you are running Microsoft Windows XP with Service Pack 3, Photoshop will run in both 32-bit and 64-bit editions. However, Adobe does not officially support the 64-bit edition and you may run into problems."

So they are still supporting it on Windows XP on their official channels.

Most of the problems with old applications in Windows come from installers using ancient techniques to detect the OS version

Most recent Adobe software could theoretically also run on older Windows versions (8 or 7, for example), but Adobe is not supporting old platforms anymore with the new subscription versions and recently dropped support for the LTSC versions of Windows 10, so keeping the old versions around is probably a smart move if they work well enough for you

People who bought licenses for old versions should be within their rights to use them as long as they can

Which simply is for longer on Windows than on MacOS


> Apple is one of the most capitalistic companies out there, they want you to buy new stuff and they'll try everything they can to force users to upgrade sooner or later

The more charitable view is that by not being wedded to backwards compatibility they can make their ecosystem stronger, faster.

See https://medium.learningbyshipping.com/apples-relentless-stra... for some discussion of those tradeoffs.


How is that anything less than mind blowing?

Twice as fast, using 1/10th the battery... and that's for a part that costs Apple $70 instead of, what, $400?


Can you imagine how frustrating it must have been at Apple knowing what you had and having to deal with intel’s crap over the last year or two.


The interesting information is after how much time the Air throttles and how much performance is lost when throttling.


Several tests seem to show it throttling after the 8.5-9 minute mark.


I watched and read multiple reviews, and Dave2D seems to be the only one who tried to quantify the throttling to some extent; all the others only had useless statements like "The Pro will probably be able to sustain unthrottled workloads for longer thanks to its active cooling" - No shit, Sherlock. For me the fact that it only throttles after 8-9 minutes (!) of heavy use is going to be the deciding factor that will allow me to go with the Air (and actual physical function keys) over the Pro, so thanks Dave.


Somebody on Twitter reported that during Rust compilation the Air started throttling a bit (20-30% hit) after 3-4 min. The Pro doesn't throttle.


Couldn't the Pro just turn on its fans after 8-9 minutes (to avoid throttling), thus giving the best of both worlds?


Best of both of worlds is:

- active cooling

- lack of a touchbar


I've been wondering if someone could make an active cooling dock for the MacBook Air. I was even thinking the M1 wattage is low enough that you could have a thermoelectric cooler lowering the case temp down to room temp.


I mean if you're desperate to get it to compile in 20 minutes instead of 25 for a particular occasion, you could just grab a bag of peas from the freezer and set the laptop on them.



That would be the Mac Mini.

But seriously, I share your opinion on laptop keyboards: regular function keys please.


The touchbar is pretty great if you program it yourself using, e.g., BetterTouchTool. I especially love the clipboard widget - works fantastic with VIM/EVIL.


That's exactly what it does.

But 8-9 minutes of full 100% CPU is a relatively rare occurrence for the vast majority of users. Developers might occasionally do that, but it will be very language and project dependent.


I assume it does. My 13" 2016 MBP doesn't turn on its fans much unless it's busy.


My 16" MBP is running its fans basically all day (iOS dev work and ARQ backups)


Yeah, I think the 45W laptops always run them, even if sometimes very slowly. The smaller laptops have been able to turn them off completely for a while, though, when not very busy.


Having the experience (or love/hate relationship - so awesomely thin and quiet, so underpowered) with the 12" MacBook, one surprise is that throttle time really depends on ambient temperature and GPU use.

In a cool room it can last a few minutes before throttling, while outside on a warm day it throttles almost instantly.

Also, the thermal budget is shared with the GPU, so once you plug in an external display, or start Sidecar, you run out of thermal headroom pretty much instantly.

I'd love to see these two factors tested.


That's one data point that's particularly interesting to me. As someone who (normally) travels a great deal, I'd probably go with the Air unless there were real throttling compromises, especially given that I use a different computer for multimedia editing at home.


Wish they would re-introduce the discontinued 12-inch MacBook with the same specs as the Air. It weighed only 970 grams vs 1.29 kg for the Air. In fact the Air feels bulky compared to other lightweight laptops like the LG Gram, not to mention the design is outdated. Always wondered why Apple killed the smaller model. Perhaps they wanted to push the iPad Pro, so they killed off the netbook line. The wannabe traveller inside me keeps drooling over the 12-inch whenever I see it in someone's hands. It feels so light and compact. With the new M1 silicon, it's the ideal time to bring it back. I would grab it without a second thought.


The 12” MacBook could not be updated to newer Intel chipsets due to thermal issues. The single port was also a limitation. Once Apple upgraded the Air to retina, a large part of the market for the 12” was lost. They were too close to each other and cross-competed except for the super portable use cases which was not large enough.

This model of Air is obviously a transition product with new guts in an old shell. I suspect that as Apple introduce fully redesigned, second generation Apple Silicon products, you might see something that is closer to the 12” MacBook.


I'm also hoping for thinner bezels as the current models' ones are just huge compared to Dell's XPS line for example. It's slowly becoming obvious that the design has barely changed since 2016 or so. The 16" model was a step in the right direction, but it's still not even close to what Dell is delivering.

It'd be amazing if they managed to squeeze a 13" screen into the old 12" form factor - you'd still get great battery life thanks to the M1.


It seems like Apple is capable of it—look at the bezels on a new iPhone or iPad. But it would certainly require a whole new shell, which probably takes a while for Apple to design and ramp up because of all the machining involved.


They should probably release an 11" version of the air, I'm not sure a 12" having the same specs as the Air would be viable.

However, the interesting part would be what they're gonna do with their iPad Pro line; at this point I don't see a reason for it not to run Big Sur (or the Bigger Sur they'll release next year) and compete directly with the Surface.

What I see Apple doing is the following:

iPhone/iPad non-pro continuing to use A series SoCs and run iOS

iPad Pro migrate to M series SoCs and become what is essentially Apple's Surface Pro

Macbook Air 13" and 11" (possibly drop to a single 12" model) with M series SoCs this essentially will be the Surface Laptop/Book competitor

MacBook Pros will continue as they are, 13" and 16" models; if Apple goes for 11" and 13" MBAs they might move the MBP to 14" and 16".

Without discrete GPUs and essentially no way to "upgrade" the CPU to a higher model I don't really see the MBP 13" being viable in the long term tbh, I think they'll need a model that will differentiate it much more from the MBA and unless Apple starts binning their future M series SoCs much more in line with Intel and AMD I don't see them having too much of a range here for upgrades.

So alternatively I also see them dropping the 13" MBP altogether and having only a 15" or 16" one, whilst the Air will occupy the smaller form factors.


Convergence can be overrated. Arguably Apple finally made tablets mainstream because they didn't feel the same need to maintain compatibility with their desktop/laptop line that others did.

But it's hard not to see some sort of convergence between mobile, laptops, and desktops over time.


They are doing convergence now by allowing iOS apps on Macs. I can definitely see the iPad Pro line being moved closer to macOS from a UI perspective, especially since the pen now works on all iPads.


I'm definitely part of the target market (well, depending upon my mood) for a <13" laptop for travel. I've never been able to make an iPad-based workflow work for me. If nothing else I spend too much time with my laptop on my, well, lap and nothing with a removable keyboard works for me.

Based on the data I've seen so far, I'm not sure why they even did a Pro variant with a fan. Even if the market for an 11-12" model is smaller, I'm not sure why they didn't do that instead. I was sure that was going to be the reason they didn't refresh their 12" Intel system.


> Based on the data I've seen so far, I'm not sure why they even did a with fan Pro variant.

The ‘pro’ variant released was the low-end 13, aka the 2 port, formerly the ‘macbook escape’. The 13” line has been bifurcated since 2016, with this one firmly lower-spec’d and powered.

It’s very likely that the ‘4 port’, or high-end 13” pro will make more use of the active cooling, so it was likely worth it to develop the new laptop with it.


The 12 inch is still my favorite MacBook experience, having owned pretty much every form factor since pre-unibody white plastic. Can't wait to see what they can do in that hyper minimal portable niche with Apple Silicon.


Haven't used it, but I can feel it. You are making me want it more... wish Steve were alive; he would perhaps have kept it alive, at least for bragging rights as the smallest, lightest laptop on the planet. I still remember Steve Jobs introducing the Air from inside an office envelope.


I'm also surprised they didn't bring that back with an M1. Here's to hoping it will be released next year to balance out the higher-end 16" Pro and whatever others come out next year.

I had the 12” MacBook for a couple years and the form factor was amazing. I backpacked around the world with it. But it was so underpowered, it was barely useful. I found myself using my phone more and more because it was less frustrating. I would love to see what an M1-powered 12” MacBook could be like!


That would be a great device to also include a touchscreen in a mac for the first time... after all macOS is getting more and more touch-capable UI and got iOS app support. :)

But like you said, likely would eat into the iPad market - on the other side, as long as they don't make it a 2-in-1, the iPad should still have more than enough reason to exist.


There are several rumours about a return of the 12-inch in 2021H1.


So.. if I place it on a slab of ice, it would work the same as Pro?

Tbh, the only reason I didn't even think of buying a pro is because I don't want the touch bar. I might still buy an air if there's no touch bar on the pro, but the decision will be a lot harder.


Both currently released M1 Pros have the touch bar, so it looks like your decision will be quite simple!


The Cinebench R23 results seem kinda weird to me. The 5950X would have almost a 40 % clock speed advantage over the M1 (~5 GHz 1C vs 3.2 GHz 1C), yet the M1 is only about 8 % slower in an entirely ALU-limited SIMD benchmark? This suggests the M1 core has like 50 % more EUs and achieves much higher throughput than Zen 3.

The SPEC results are... decisive to say the least. Without Zen 3, x86 CPUs would look, well... like shit. All Intel offerings, including the Sunny Cove part (so not a 7 year old uarch), look uniformly bad across all workloads.


You overestimate how ALU limited Cinebench is.

I managed to find an AVX-on vs AVX-off benchmark run for Cinebench R20 [1]. Going from 128-bit SSE to 256-bit AVX and doubling the ALU width only results in a 10-12% increase in performance.

I assume this has to do with how each SIMD lane of calculation might need to branch independently, limiting the performance speedup from just throwing wider SIMD ALUs at it.

[1] https://www.techpowerup.com/forums/threads/post-your-cineben...


ELI5 please, how does this change things for the M1?


There is more than one way to scale. Over the last decade, Intel has been pushing wider SIMD.

Instead of making your CPU execute more instructions per cycle, you make each instruction do more work. SSE packs four floats/ints or two doubles/longs into a single 128-bit register, and then the same ALU operation is applied to each lane.

It works great on certain workloads.

With AVX, Intel increased the size of these registers to 256 bits (eight floats) in 2011, and they are currently pushing AVX-512, which doubles the width again (16 floats).

Apple, and ARM in general, are limited to 128-bit vector registers (though there are plans to increase that in the future).

Cinebench is well known as a benchmark that takes advantage of the 256-bit AVX registers, and some people have speculated that Apple's M1 might be at a significant disadvantage because of this, with just half the ALU throughput.

But these numbers show that while Cinebench gets a notable boost from AVX, it's not as large as you might think (at least on this workload), allowing the M1's IPC advantage to shine through.
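To make the "wider registers" idea concrete, here's a minimal sketch in C intrinsics. The function names are mine, the loops assume n is a multiple of the vector width (real code needs a tail loop), and the AVX path has to be built with something like -mavx:

    #include <immintrin.h>

    // Scalar: one float added per instruction.
    // (No tail handling anywhere below: n is assumed to be a multiple of the vector width.)
    void add_scalar(float *dst, const float *a, const float *b, int n) {
        for (int i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }

    // SSE: 128-bit registers, four floats per instruction -- the same width NEON gives the M1.
    void add_sse(float *dst, const float *a, const float *b, int n) {
        for (int i = 0; i < n; i += 4)
            _mm_storeu_ps(dst + i, _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    }

    // AVX: 256-bit registers, eight floats per instruction.
    void add_avx(float *dst, const float *a, const float *b, int n) {
        for (int i = 0; i < n; i += 8)
            _mm256_storeu_ps(dst + i, _mm256_add_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i)));
    }

Same loop, same work; the AVX version just retires half as many instructions as the SSE one, which is exactly the front-end saving the width race is chasing.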


I'd note that both arguments have merit.

A SIMD is basically controller + ALUs. A wider SIMD gives a better calculation to controller ratio. Fewer instructions decreases pressure on the entire front-end (decoder, caches, reordering complexity, etc). This is more efficient overall if fully utilized.

The downsides are that wide units can affect core clockspeeds (slowing down non-SIMD code too), programmers must optimize their code to use wider and wider units, and some code simply can't use execution units wider than a certain amount.

Since x86 wants to decrease decode at all costs (it's very expensive), this approach makes a lot of sense to push for. If you're doing math on large matrices, then the extra efficiency will make a lot of sense (this is why AVX512 was basically left to workstation and HPC chips).

Apple's approach gambles that they can overcome the inefficiencies with higher utilization. Their decode penalty isn't as high which is the key to their strategy. They have literally twice the decode width of x86 (8-wide vs 4-wide -- things get murky with x86 combined instructions, but I believe those are somewhat less common today).

In that same matrix code, they'll (theoretically) have 4x as many instructions for the same work as AVX-512 (2x vs AVX2), so we'd expect to see the x86 approach pay off there. In more typical consumer applications, code is more likely to use intermittent vectors of short width. If the full x86 SIMD width can't be used, then the rest is just transistors and power wasted (a very likely reason why AMD still hasn't gone wider than AVX2).

To keep peak utilization, M1 has a massive instruction window (a bit less than 2x the size as Intel and close to 3x the size of AMD at present). This allows it to look far ahead for SIMD instructions to execute and should help offset the difference in the total number of instructions in SIMD-heavy code too.

Now, there's a caveat here with SVE. Scalable Vector Extensions let the programmer issue a single instruction without baking in the execution width. The implementation then has the choice of using a smaller SIMD unit and executing more cycles, or a wider SIMD unit and executing fewer. The M1 has 4 floating-point SIMD units that are supposedly identical (except that one has some extra hardware for things like division). They could be allowing these units to gang together into one big SIMD unit if the vector is wide enough to require it. This is quite a bit closer to the best of both worlds (you still have multiple controllers, but lose all the extra instruction pressure).
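For what it's worth, the M1 itself only exposes 128-bit NEON (no SVE support has been publicly documented), but here's a rough sketch of what the vector-length-agnostic SVE model looks like with the ACLE intrinsics, assuming a toolchain and core that support it. The function name is mine; build with something like -march=armv8-a+sve:

    #include <arm_sve.h>

    // The same binary runs unchanged on hardware with 128-, 256-, or 512-bit SVE vectors:
    // svcntw() reports how many 32-bit lanes the hardware actually has.
    void add_sve(float *dst, const float *a, const float *b, int n) {
        for (int i = 0; i < n; i += (int)svcntw()) {
            svbool_t pg = svwhilelt_b32_s32(i, n);   // predicate masks off the tail elements
            svfloat32_t va = svld1_f32(pg, a + i);
            svfloat32_t vb = svld1_f32(pg, b + i);
            svst1_f32(pg, dst + i, svadd_f32_x(pg, va, vb));
        }
    }

The instruction count shrinks automatically on wider implementations, which is the "best of both worlds" property described above: the software doesn't have to be recompiled to take advantage of ganged-up or wider units.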


I agree, both approaches have merit.

But at this point I really have to question what code gets decent speedups with AVX-512 that wouldn't perform even better on a GPGPU.


ELI5: Some things that ARM can do in one instruction can be done with multiple instructions on x86


That's absolutely not the conclusion here.

One person thought that benchmarks were saying that the M1 had strong SIMD performance, but the reality is that cinebench (and in fact most renderers) doesn't use SIMD very effectively when looking at the whole process, and the assumption that it demonstrates SIMD performance is not correct.


Intel's CPUs have been getting remarkably worse for a long time now.

They trail the software improvements. To give you an anecdote: I got a ThinkPad T430s and it made my work feel 10x faster (Java EE development in 2012). I got my next ThinkPad, a P51s, in 2017, and it was just one huge disappointment. It felt like Intel was stepping back. I now have a ThinkPad P1 and the computing power is still just OK, though still better than the U-class i7 in the P51s.

I'll be happy to knock Apple for marketing BS("3 times faster", etc). But Intel has shown that they just need to crumble. I hope that my next laptop is not using Intel's ISA or cores.


>I got my next ThinkPad, a P51s, in 2017, and it was just one huge disappointment. It felt like Intel was stepping back.

The microcode bug mitigations cut performance by 20-30%, varying with your workload. And it comes with a 4K display? That would also contribute to a performance loss, depending on what you're doing.

That 2.8-3.9 GHz CPU would be equivalent to a 2.3-3.3 GHz one before the bug fixes, AFAIK. That's hardly faster than 10-year-old dual cores, wow!

>>they just need to crumble.

Less competition will only make things worse for us consumers. ¯\_(ツ)_/¯


I'd say Intel has been enjoying the fruits of its pseudo monopoly for far too long now.


> Less competition will only make things worse for us consumers. ¯\_(ツ)_/¯

Intel is so large that it is using up too much of the production capacity for anyone else to enter the market. Intel crumbling = more resources for new players to get lower-cost manufacturing capacity.


Someone posted these results on Reddit earlier: https://www.reddit.com/r/macgaming/comments/jvrck7/m1_macboo...

As far as I know, DotA 2 is running on Rosetta.


Those are some significant improvements. I'm actually really tempted to get a Macbook Air now versus my current plans to build a cheap gaming rig.


Ehh, if you are going to game you are going to want the Pro at a minimum, but honestly the Mac gaming scene is still sparse. It really comes down to what type of game you want to play, I guess, but even if all the games you want are on Mac, I'd still say you want the fan in the MBP.


I’m still playing nearly decade-old games. I just need something competent at decade-old gaming so that my wife and I can play together.


Not Apple then. They ditched 32 bit applications and in 2 years they'll ditch 64 bit x86 applications.

If you want your old games to run forever, you sadly have to do that on x86 Windows (or maybe Linux, with more or less of a headache setting them up).


Agree 100%. If you wouldn't buy an iPad to play your games, don't buy a Mac to play those games either. Unless by "games" you literally just mean WoW, which seems to be one of the only major cross platform games that seriously cares about Mac support.

Personally, I would lean towards suggesting the purchase of a console. The new generation has some really nice consoles, and the Nintendo Switch is still really fun in other ways.


Rosetta 1 was around for 5 years, so I expect Rosetta 2 to last at least that long, but that's still a good point. Thanks!


Rosetta 2 only runs 64 bit apps though. Or so they say. 32 bit apps were dropped with Catalina.

Incidentally, I'm still on Mojave because of that.


I wonder if a full DotA match, which can last somewhere between 25 and 45 minutes, can be played. According to Dave2D, the M1 Air starts throttling after 8-9 minutes.


That link has some charts with FPS dropping over time as throttling kicks in. It drops from 110fps to 90fps.


Macbook air: $999 (2 ports, no hdmi)

vs

$699 mac mini (2 ports + hdmi) + ipad $329 = $1029

Thinking of upgrading: my current 2013 MacBook sits in a drawer 99% of the time, connected to a monitor and keyboard. The 1% of the time when I travel, the MacBook is too large to use comfortably in an airplane seat; the iPad would work better for this. The MacBook Air also only has two ports, so one would be used for an external monitor and the other for power, leaving no place to connect external drives, which I need for video editing (and sometimes I need to connect two drives to transfer files between them). It seems like this would only be doable on a MacBook Air running on battery.


> macbook air also only has two ports, so one would be used for external monitor, the other for power. No place left to connect external drives.

You can do all this through one port. Most LG or Dell Thunderbolt 3 monitors can supply 65 watts of power (some models go higher, up to 80 or even 100 W), which should be enough to run and charge this MacBook Air decently, and they have 3 extra USB Type-A ports on the monitor.


I switched from a MBP to a Mac Mini and iPad Pro last winter for my personal setup. I absolutely love it. I spend roughly equal time on both, but I’m not doing a ton of programming these days outside of work. Taking the iPad traveling is way easier than taking a laptop.


Similar setup (desktop Mac plus iPad Pro with keyboard), and similarly happy with it, but I'm afraid once the COVID situation has been resolved I'll need a MacBook again. The iPad works surprisingly well as a laptop replacement, and I can get things done on it, but it's not an optimal environment for serious work.


I’m curious what you consider serious work? I write a lot on the iPad, can answer emails, get my shopping done, etc. I definitely wouldn’t use it for programming, but I know some people have set it up to do so.


Yeah, programming. I agree it's a pleasure to do the other tasks you mentioned on the iPad, but coding is cumbersome (although still possible). I'm using a VPS and Blink, the upshot is that you become decently efficient working with tmux/vim/etc.


Are you using the Magic Keyboard? Just asking if that is a big part of your good experience with the iPad.


Not who you replied to, but I have one of those non-numpad small Apple BT keyboards and it has worked really well for me when traveling with only my iPad. It fits in the same carry sleeve I use for the iPad, holds a long charge, and is usable when I ssh to a host for IRC or if I really want to code in VIM.


Nope, I’m using the Smart Keyboard folio, since it came with the iPad. (Bought secondhand from a friend.)


I wish apple or some company would make a computer stick that plugs into a usb c hub for power and peripherals. No battery and no buttons other than power.

I don't see why it couldn't be about the size of a wallet and offer at least as good thermals as a MacBook Air.


The Mac Mini is only 7 inches x 7 inches x 1.5 inches. It's not a stick but you could certainly mount it on the back of your monitor or under your desk and never see it.


I carry my laptop to work and back with me. I want to ditch the laptop bag and just carry a stick to plug in anywhere.


Similar products do exist, although they're not very good: https://www.amazon.com/dp/B07KKYZL66


I've seen those. They all are made to plug into the back of a tv through hdmi.

HDMI doesn't carry power, and typically no peripherals like a keyboard and mouse. To get that to work the way I want, it would need multiple cables plugged into it.

I just want to stick my computer into a hub like a flash stick and have it boot up.


I use a CalDigit Thunderbolt 3 dock with my MacBook Pro. The website says it works with M1 Macs, but you only get one screen. It delivers power (87W) and network, and has a bunch of ports, including allowing Thunderbolt daisy-chaining. It has a 10Gb/s USB-C port and five 5Gb/s USB ports. That uses a single port. So if you had, e.g., two TB3 external drives, you could plug one into the dock and one into the other port on the Mac. This dock might be overkill for you (it's $250), but I used it to get dual external screens on my MBP before I said "fuck it" and bought an eGPU.


It's certainly an impressive achievement and makes it pretty clear why Apple is transitioning away from Intel. I'm a bit surprised that the fact that this is on the TSMC 5nm process seems to be glossed over in the comments. Apple is benefitting from what seem, on the surface, to be significant process improvements. It will be interesting to see how other players take advantage of it as well.


No, the gains from Apple's A13 (TSMC 7nm) to Apple's A14 (TSMC 5nm and same cores as the M1) really aren't that large. About average for a node jump.

This is mostly about architecture, not silicon process.

From A13 to A14, Apple managed to increase the clockspeed by about 15% and increase IPC by 5% all while keeping power consumption the same.


I believe that remains to be seen. I'm not sure if they were able to effectively take advantage of the new process right away, perhaps not. It looks impressive, but it's hard to say until we have some kind of more direct comparison.


This is as much a ringing endorsement of AMD Ryzen as it is of the M1. The 15W 4800U is just as impressive as the M1, and the performance-per-watt gap seems small enough to bridge with AMD's upcoming 5nm switch.


AMD definitely is the best of the rest, but it doesn't seem quite as close as it may be held to.

In single-core tests the 4800U is running that core at 4.2 GHz. Yet it gets soundly bested by the M1 at 3-3.2 GHz (running at a 50%+ advantage, 65%+ clock for clock). The M1 has an enormous IPC advantage.

In a multicore test the 4800U has 8 performance cores with HT. It only marginally beats an M1 with 4 performance cores and 4 efficiency cores (judging by the scaling, the efficiency cores look like they're 1/4 the performance or worse; these are very lightweight cores).

Again, it's the best of the rest, but Apple clearly holds an enormous lead here. Somehow everyone is focused on 5nm, but the A13 on 7nm was still in a substantial lead. The 4800u on 5nm is only going to be marginally better.

Apple clearly sandbagged this first entrant because they're packing it into their "entry level" devices. In six months or whatever they'll unveil the 6+2 core device in the mid range, the 12+2 in the high range, etc, and we'll be back at these discussions.

(Speaking generally) - This whole discussion about Apple Silicon is fascinating because the goalposts have moved so much. Looking back at HN discussions from a year ago, everyone was talking about some pathetically weak entrant that would be a joke, etc. Now people are celebrating that it doesn't beat a 24-core, 300W Threadripper. Now the narrative is that it isn't impressive because v1 didn't overwhelmingly destroy everything else in the market.


> The 4800u on 5nm is only going to be marginally better.

This is definitely not true. We already know Zen 3 is +20% IPC over Zen 2 on the same process at the same power. So add 20% to the 4800U without changing anything else as a starting point.

Then toss in the process improvements from 5nm (which TSMC says is either 15% faster or 30% less power) as well as any further architectural improvements that AMD is doing in Zen 4 and there's going to be a very significant gap between the 4800U and AMD's 5nm 6800U or whatever they end up calling it.


To be clear, I said that a 4800u @ 5nm (if one could simply scale a design like that) would only be marginally better. That the 5nm boogeyman is more incremental than the big advantage it is held as.

You replied that if you take the 4800U, switch it to 5nm, switch it to Zen 3... no, actually switch it to Zen 4 and a completely different chip, then it would be lots better, so what I said is "definitely not true".

I'm not sure this logic follows.


A 4800U on 5nm would be 30% more efficient or 15% faster. 5nm was a significant bump.

And that's before considering the density improvement that came along with it (which is also substantial: TSMC's N5 is up to 1.8x the density of N7). That's why I mentioned Zen 3 & Zen 4: you don't make the same chip across a shrink; you use the extra budget to do things.


When a die shrink is correlated with higher performance, generally that means higher clock speed for a given heat profile. The clock speed of the 4800U is already 50% higher than the M1 when running a single-thread task. And despite that, it is 25-50% slower. And it's significantly less energy efficient.

So put that on 5nm. Either you've partly closed the large efficiency gap, or you've partly closed the significant performance gap, but in neither case will you come close to closing either entirely. So you're either a bit less slow but still a lot less efficient, or still a lot slower but a bit more efficient.

Yeah, maybe they'll do some amazing things on the core that'll overcome all of this. But right now the Firestorm core has a massive advantage. AMD can go to Zen4 and 5nm and maybe they'll close the gap, but Apple won't be sitting still.

Apple entered the desktop space and brought the most efficient core (by _far_), hilariously offering the best per-core performance without going to magnitudes-higher thermal profiles. It's pretty amazing. So now we're comparing it to hypothetical, mythical future alternatives from competitors.

Sidenote: Apple has had blazing cores for a few years now, and every denier will claim it's just some big cache or some other absurd simplification. If that were the case, everyone would just copy them.


> In six months or whatever they'll unveil the 6+2 core device in the mid range, the 12+2 in the high range.

The current rumors are pointing to a single 8+4 chip for the higher end 13" Pro and a 16" Pro possibly with vastly improved graphics


You are making the mistake of equating TDP with actual power draw. The Yoga Slim 7, which uses a 4800U, has an average load power of ~50W [0] vs the Mini's load power of ~30W.

[0]: https://www.notebookcheck.net/The-Ryzen-7-4800U-is-an-Absolu...


Not only that, but the 4800U is also Zen 2. The Zen 3 mobile processors (5000 series) are not here yet.


Yes, will Zen 3 give these mobile parts a 19% boost? That would be incredible if so...


It is an awesome CPU. But Apple have put in a sick GPU too.



The question is whether they were compiling for the same architecture on both x86 and ARM.


That's a good question but I don't think it would make a huge difference. Those details should have been included.

Safari is already a universal binary on my Intel Mac running Big Sur; that means WebKit runs natively on Intel and M1 processors.


It doesn’t; I’ve been compiling both for a while now, and the amount of time each takes is almost the same.


Wow, the M1 MBP is on par with the 12-core Mac Pro from 2019 for the WebKit compilation. And even more impressive: "After a single build of WebKit, the M1 MacBook Pro had a massive 91% of its battery left. I tried multiple tests here and I could have easily run a full build of WebKit 8-9 times on one charge of the M1 MacBook’s battery. In comparison, I could have gotten through about 3 on the 16” and the 13” 2020 model only had one go in it."


Interesting that it is not able to outperform the Zen 3 CPUs. I had expected it to do somewhat better, especially given that it's a 5nm processor and all the hype around ARM processors.

I don't know how well it will hold up to its x86 competitors like this, especially once they launch their 5nm CPUs next year.


In multi-threaded mode - which is what Zen 3 is optimized for - the M1 barely reaches 30% of the Zen's performance.

I mean, that's kind of expected if you compare a low-power CPU with fewer cores against an unlimited-cooling desktop monster with many more cores.

The M1 will likely be an amazing laptop chip, but still unusable for demanding desktop work, e.g. CGI.


>I mean that's kind of expected if you compare a low-power CPU with fewer cores against an unlimited-cooling desktop monster with much more cores.

Are we looking at the same charts here? For Cinebench multithreaded, the AMD 4xxx series CPUs are Zen 2 parts with 15/35W TDPs, hardly the "unlimited-cooling desktop monster" you described.


From the article: "While AMD’s Zen3 still holds the leads in several workloads, we need to remind ourselves that this comes at a great cost in power consumption in the +49W range while the Apple M1 here is using 7-8W total device active power."

Looking through the benchmarks, the zen 2 parts generally seem to have lower performance than the M1. The cinebench multithreaded benchmark is one exception. It's not that surprising because the 4800U has more cores than the M1 has high performance cores. The M1 wins the single threaded cinebench benchmark.


The Zen 2 4800HS also outperformed the M1 in the SPECint2017 multi-threaded results.

The M1's float results are weirdly good relative to the int results, though. Not sure why Apple seems to have prioritized that so much in this category of CPU.


Maybe because of JavaScript, where all numbers are floats?


Not strictly true.

Taking a loop and adding a bunch of `x|0` can also often boost performance by hinting that integers are fine (in fact, the JIT is free to do this anyway if it detects that it can).

The most recent spec is also adding BigInt. Additionally, integer typed arrays have existed since the 1.0 release in 2011 (I believe they were even seeing work as early as 2006 or so with canvas3D).


It's a higher TDP part (I think - it's 35W) and has more high performance cores, so it's not surprising that it would win some of the multicore benchmarks.


I've posted this elsewhere in this thread, but the M1 on SPEC reaches Desktop-tier performance, going toe to toe with the 5950X: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


In single-core performance, yes, but as the next page of the article shows, it's more comparable to the 4900HS, AMD's mobile CPU, in multithreaded performance.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


Yes, sorry if that came off as misleading. I'll edit this in elsewhere.


The 4900HS has a 35W TDP, though. The M1 in the Mac Mini is estimated at around 20-24W TDP.


We might be able to chalk that level of difference up to process advantage.


The 4900HS has way more I/O, is on 7nm, and has more powerful graphics.


I guess we're talking at cross purposes. I was just making the point that the 4900HS isn't really a competitor to the M1 because it's in a different TDP class. It looks like Apple wanted to stick with one chip for their first generation products, but they could presumably at least throw in a few extra cores if they had another 10-15W to play with.


Well no, but then again there is the 4800S at half the TDP, which gets close in single-core performance and wins in multicore performance, so there's pretty much no way they could've beaten it at another power budget.

Indeed, if they were to add a few more cores to their M1, then AMD could have also thrown a few more cores in their 4800, and it would have been a wash.


Have you seen the benchmarks past the first page? The M1 at 7-8W power draw beats or just trails behind a desktop-class $799 Ryzen 9 5950X at 49W+ consumption in single-threaded performance. What did you expect?


The 5950X's CPU cores at 5 GHz consume around 20W each, not 49W+. And power scaling is extremely non-linear, such that at 4.2 GHz it's already down to half the power consumption, at 10W/core.

The 5950X's uncore consumes a significant amount of power, but penalizing it for that seems more than a little unreasonable. The M1 is getting power wins from avoiding the need for externalized IO for GPU or DRAM, but those aren't strictly speaking advantages either. I, for one, will gladly pay 20w of power to have expandable RAM & PCI-E slots in a device the size of the Mac Mini much less anything larger. In a laptop of course that doesn't make as much sense, but in a laptop the Ryzen's uncore also isn't 20w (see the also excellent power efficiency of the 4800U and 4900HS)


That doesn't say too much. There is a single-thread performance ceiling that all CPUs based on currently available lithography technology bump against and can't overcome. The Ryzen probably marks that ceiling for now, and the M1 comes impressively close to it, especially considering its wattage.

But you cannot extrapolate these numbers (to multi-core performance or to more cores or to a possible M2 with a larger TDP envelope), nor can you even directly compare them. The Ryzen 9 5950x makes an entirely different trade-off with regard to number of cores per CPU, supported memory, etc., which allows for more cores, more memory, more everything...and that comes at a cost in terms of die space as well as power consumption. If AMD had designed this CPU to be much more constrained in those dimensions and thus much more similar to what the M1 offers, they would surely have been able to considerably drive down power consumption - in fact, their smaller units 4800U and 4900HS which were also benchmarked and which offer really good multithreading performance for their power envelope, even better than the M1, clearly demonstrate this fact.

What I read out of these benchmark numbers is: the ISA does matter far less than most people seem to assume. ARM is no magic sauce in terms of performance at all - instead, it's "magic legal sauce", because it allows anyone (here: Apple; over there: Amazon) to construct their own high-end CPUs with industry-leading performance, which the x86 instruction set cannot do due to its licensing constraints.

Both ISAs, x86_64 and ARM, apparently allow well-funded companies with the necessary top talent to build CPUs that max out whatever performance you can get out of the currently available lithography processes and out of the current state of the art in CPU design.


> What I read out of these benchmark numbers is: the ISA does matter far less than most people seem to assume.

This was my conclusion too. Does this mean there is not much possibility of desktop PCs moving to ARM anytime soon? Perhaps laptops might move to ARM processors, but even that seems iffy if AMD can come up with more efficient processors (and Intel too, with its Lakefield hybrid CPU).


Yeah, as someone whose next laptop won't be a Mac again, this was a good ad for what AMD has achieved lately. My Lenovo P1 has an Intel Xeon of some kind, and while I'm otherwise very happy with the laptop, the CPU is hot, uses way too much power, and constantly throttles.


I fully expect the 16" MBP to launch with a 12 or 16-core Apple chip.


Interesting that your takeaway from all this is "oh, it can't beat some of the top x86 chips in existence—it can only meet them on even footing. Guess it'll be falling behind next year."

This is Apple's first non-mobile chip ever. You think this is the best they can do, ever?


> This is Apple's first non-mobile chip ever. You think this is the best they can do, ever?

They have been making mobile ARM chips for quite some time, so it's not like they are inexperienced.


Look at the trend lines. They've been able to keep increasing single-core performance every year. There is no reason to think that stops this year.


They increased IPC only around 5% with A14. The remaining performance increase was from clockspeeds (gained without increasing power due to 5nm).

Short, wide architectures are historically harder to frequency scale (and given how power vs clocks tapers off at the end of that scale, it's not a bad thing IMO).

4nm isn't shipping until 2022 (and isn't a full node). TSMC says that the 5 to 3nm change will be identical to the 7 to 5nm change (+15% performance or -30% power consumption).

Any changes next year will have to come through pure architecture changes or bigger chips. I'm betting on more modest 5-10% improvements on the low-end and larger 10-20% improvements on a larger chip with a bunch of cache tweaks and higher TDP.

Intel 10nm+ "SuperFin" will probably be fixing the major problems, improving performance, and slightly decreasing sizes for a final architecture much closer to TSMC N7.

I'm thinking that AMD ships their mobile chips with N6 instead of N7 for the density and mild power savings (it's supposedly a minor change and the mobile design is a separate chip anyway). Late next year we should be seeing Zen 4 on 5nm. That should be an interesting situation and will help resolve any questions of process vs architecture.


I agree that most of the gains were due to the node shrink. However, being able to stick to these tick-tock gains for the last several years is impressive. They could have hit a wall in architecture and been bailed out by the node shrink, but I doubt they would have switched away from Intel if that were the case.


I’d argue that the M1 is a mobile chip, and it’s the low-end model. You’re still right: the M1 is nowhere near the best Apple is able to deliver.


I'd expect NVIDIA to join the ARM CPU race, too. And they have experience with the tooling for lots and lots of cores from CUDA. So I'd expect to have 5x to 10x the M1's performance available for desktops in 1-2 years. In fact, AMD's MI100 accelerator already has roughly 10x the FLOPS on 64bit.

That said, it's an amazing notebook CPU.


To quote from Ars Technica's review of the M1 by Jim Salter [0]:

> Although it's extremely difficult to get accurate Apples-to-non-Apples benchmarks on this new architecture, I feel confident in saying that this truly is a world-leading design—you can get faster raw CPU performance, but only on power-is-no-object desktop or server CPUs. Similarly, you can beat the M1's GPU with high-end Nvidia or Radeon desktop cards—but only at a massive disparity in power, physical size, and heat.

...So, given that, and assuming that Apple will attempt to compete with them, I think it likely that they will, at the very least, be able to match them on even footing, when freed from the constraints of size, heat, and power that are relevant to notebook chips.

[0] https://arstechnica.com/gadgets/2020/11/hands-on-with-the-ap...


>I'd expect NVIDIA to join the ARM CPU race, too.

Nvidia has been making ARM SoCs since 2008. They have been used in cars, tablets, phones, and entertainment systems.

What do you think powers the Nintendo Switch?


Agreed. Yeah, I should have thought about the Switch and written things more clearly.

I meant that NVIDIA will start producing ARM CPUs optimized for peak data-center performance, similar to how they now have CUDA accelerator cards for data centers, which are starting to diverge from desktop GPUs.

In the past, NVIDIA's ARM division mostly focussed on mobile SoCs. Now that Graviton and M1 are here, I'd expect NVIDIA to also produce high-wattage ARM CPUs.


Am I reading this right? Is the new mac mini competing with a Ryzen 5950X? The whole mac mini costs the price of that processor alone. This is insane.


Only in single threaded performance, which nobody actually uses for rendering.

In multi-threaded, the Ryzen 5950X is at 28,641 while the M1 is at 7,833. So no, the Mac Mini maxes out at 27% of the Ryzen 5950X if you use it properly. And I was already being generous by using the M1 number for a native port, while in reality you'll likely need Rosetta and take a 33% performance hit.


I think the overall point is that for the average user, who doesn't need all those cores or couldn't make good use of them, the M1 may in fact feel, and be, faster.

For users like you or I, of course we'd see a huge difference, but not everyone is running workloads that need more than 2 or 4 cores.


An average user is going to buy a 5600X or whatever not the 5950X, and the 5600X's single-threaded performance is barely behind the 5950X. You only get a 5950X if you want multi-threaded performance.


I have a theory on why ST perf is always the most important metric for me and some other folks. When you're waiting for something synchronously, like a webpage rendering or stuff opening, you're usually running an ST load. Stuff that can benefit from multithreading is usually a planned task. So does it make a difference if it takes 4 minutes compared to 3? You will still context switch.


Right now I have ~20 tabs open and a few apps, a workload which is probably similar to the average user's. My machine currently has 510 processes running with 2379 threads, though most of them are in the background. I'd wager core count is more important than ST performance nowadays, especially considering that applications seem to be multicore-optimized.


I’d check your activity monitor to see how many of those are sleeping. My suspicion is that most of them probably are, likely to the point where you are using “less than a core” to handle the load.


> For users like you or I, of course we'd see a huge difference, but not everyone is running workloads that need more than 2 or 4 cores.

It’s hard to imagine that a regular person playing games, editing the family photos, or editing the kid's birthday party videos isn't using multiple cores for almost everything they do.

Even browsing the web these days uses multiple cores.

Apple wouldn't have made the investment if people couldn't see and feel real world results.


Just using a web browser these days requires many threads and processes to run at once.


Depends on what you're doing. For example, compiling is multi-core, but linking is normally single-core. Many workloads are still heavily single-core-dependent, so great single-core performance is still a big asset.


> linking is normally single-core.

GNU gold was doing threaded linking 15 years ago, and nowadays threaded linking is the default for new linkers like LLVM's lld. Unless you rely on very specific GNU linker hacks, there isn't any reason not to use lld; it works fine for linking large software like LLVM/Clang, Qt, ffmpeg...


Parts of the linker basically have to run in a single core, though.


Yeah, but this is a laptop chip at ~20W. Of course it's not going to compete with a 16-core 120W monster.

Getting 1/4 of the performance with 1/4 of the (high perf) cores and 1/6 of the power is very impressive.


But the Ryzen 5950X has 16 cores while the M1 has only 4 high performance and 4 low performance cores. So the Ryzen gets 4x multi-core performance with 4x the cores.


I wonder if Apple will bother to produce a CPU with desktop-level TDP. That would really compete with the Ryzens.


I really hope so. And I’d think they’d want something to put in their new iMac, which could be a CPU with 16 M1-cores resulting in a 100 watt TDP.


> The MacBook Air and MacBook Pro get chips with all 8 GPU cores enabled. Meanwhile for the Mac Mini, it depends on the SKU: the entry-level model gets a 7 core configuration, while the higher-tier model gets 8 cores.

This appears to be wrong. From what I can see on apple.com, the Air offers the choice of a 7 or 8 core GPU, while the Pro and mini start with the full chip.


I hope Apple allows us to install the OS of our choice. The battery life is impressive but I refuse to not use Linux.


They have already stated they won’t:

“We’re not direct booting an alternate operating system,” says Craig Federighi, Apple’s senior vice president of software engineering. “Purely virtualization is the route. [...]”

https://www.theverge.com/2020/6/24/21302213/apple-silicon-ma...


Another item added to the list of why I'm not buying one of these.


I feel kind of grossed out, as a developer (and tinkerer) by how locked down Mac products are. It's not really your computer, you're just renting. Apple has decided that they know what you want and need better than you.


It's really kind of tragic that so much incredible research and engineering work goes into creating new hardware like this only for it to be locked into one particular company with very tight constraints on target audience, income bracket, and technical limitations. Think how incredible it would be if everyone could use this new silicon.


It is, in fact, already used by everyone, because it's an evolution of the chipset in basically every smartphone in the world with widely divergent target audiences, income brackets, and technical limitations.


I don't know that that's a fair comparison. Just because it's an ARMv8 chip doesn't mean it's directly comparable to what's in smartphones. (I assume you aren't comparing it to Apple made chips for iPhone specifically, since then it wouldn't be true that it's in "basically every smartphone in the world".)

In particular, this is the first 5nm chip to be widely available, and by most accounts on performance it competes with top of the line hardware at a small fraction of the power use. Most existing ARM chips are designed for the very-low-power market, e.g. in phones, not to be used in a high performance laptop.

If there's a Dell or Thinkpad laptop with an ARM chip that's comparable, by all means, let me know.


AFAIK you are correct. Apple has completely designed their own ARM chip. It has the same instruction set (or a superset of the instruction set) as what runs in a cellphone, but the design is completely different from, say, Qualcomm's chips.


Just because it is locked down, why is that the same as "renting"? Those are two very different concepts.


Because you are not the ultimate decider of what to do with the machine. If you owned it, you could do anything with it, short of causing harm.


The whole concept of the machine is to be bought and optimized for running macOS.


I guess, but the "whole concept of the machine" that I'm typing this on was to run Windows... 7 (I think?); that's a completely artificial limitation, as shown by running Ubuntu on it years after the hardware went out of support.


I'm not sure what the problem is, then. You have a device that does what you (or the GP) want, which is to install any operating system, tinker, etc.

Is the worry that Apple and its practices will dominate the industry to the point that you literally will not be able to turn on your current machine and use it?


> Is the worry that Apple and its practices will dominate the industry to the point that you literally will not be able to turn on your current machine and use it?

I know you're joking, but I actually kind of am...

Apple has a tremendous amount of industry influence, just see removal of the headphone jack.


macOS deprecates support for Macs that are 5-7 years old with every release. I put Linux on them when new macOS releases no longer support them, and they're perfectly good machines afterwards.

When macOS deprecates support for these ARM Macs in 5-7 years, Linux isn't an option for them unless Apple puts in a lot of work to support a mainline Linux kernel on their hardware. Apple has said they won't support running other operating systems on these ARM Macs unless they're virtualized.


>When macOS deprecates support for these ARM Macs in 5-7 years, Linux isn't an option for them unless Apple puts in a lot of work to support a mainline Linux kernel on their hardware.

Why would Apple need to "put a lot of work in"? Apple doesn't support Linux on x86 either. Third parties did the Mac Linux ports for x86, and they will do them for the ARM Macs.

The only thing Apple needs to do is to not lock the ARM Macs from booting another OS, which is very easy to do -- Apple doesn't need to invest lots of work to run Linux on ARM Macs, just needs not to prevent it.


> Why would Apple need to "put a lot of work in"? Apple doesn't support Linux on x86 either. Third parties did the Mac Linux ports for x86, and they will do them for the ARM Macs.

Because ARM SoCs are fundamentally different from 32-bit and 64-bit x86 machines. The prime difference is the lack of an enumerable bus, which x86 machines and even some ARM servers have, but which is missing in ARM SoCs.

I bought an x86 Mac when they were first released and I was able to boot an Ubuntu live CD when I got it. No work was needed to get a mainline kernel running on a x86 Mac, but work was needed to support things like Apple's SMC and cameras etc.

> The only thing Apple needs to do is to not lock the ARM Macs from booting another OS, which is very easy to do -- Apple doesn't need to invest lots of work to run Linux on ARM Macs, just needs not to prevent it.

This is not true. Given the lack of an enumerable bus, someone will need to either fork the kernel and hardcode addresses for the hardware, or someone will need documentation to build out the DeviceTree. If the hardware doesn't conform to existing standards (and nearly every ARM SoC follows its own), someone will need to do further work to port the kernel to the machine. All the special deviations from standards that Apple baked into their hardware either need to be documented accurately, or Apple needs to put the work in to get mainline Linux running on their SoCs.

This is a general problem in the ARM SoC and Linux space, and it is not unique to Apple's SoCs. There are millions of ARM SoCs that are either stuck on old kernel forks because vendors never put the work in to get mainline Linux to support them, or will never run Linux at all, ever. I don't even think all of the Raspberry Pi models have mainline support yet, and those that do only have it because of the work put in by the RPi Foundation, which has access to some (though I don't believe all) vendor documentation.

To get an idea of the scope of the problem concerning Linux support on ARM SoCs, check out this presentation[1].

[1] https://elinux.org/images/a/ad/Arm-soc-checklist.pdf


Right, the point is that it didn't use to be that way exclusively and now it is, so the new machines are more restrictive than previous Macs, which also ran macOS.

In fact, macOS itself is more restrictive nowadays than it used to be.


Apple could optimize their hardware and software without making the machine locked down. Those are somewhat orthogonal issues.


But renting implies you are continuing to pay money and will some day need to return it.


There's a direct parallel you can draw between software licensing and leasing.


But we’re not talking about software leases.


No, not necessarily. Renting just implies you're not the owner and need to follow someone's rules, (that of the actual owner), in order to make use of the rented item.

'Purchasing' a Kindle book or video on Amazon, for example, is also renting: it doesn't mean you have to keep paying, and yet you don't own the copy, since Amazon gets to decide how you're allowed to consume it and whether they'll let you keep it [1][2].

1 - https://en.wikipedia.org/wiki/Amazon_Kindle#Criticism

2 - https://www.hollywoodreporter.com/thr-esq/amazon-argues-user...


I don’t think purchasing a computer is the same thing as buying a movie from Amazon. The computer is always gonna be yours, and you can do whatever you want with it, even if Apple has made it very difficult to do so. There are lots of objects in my house that would fall under that category as well, and I still consider myself their owner.


For the class of device the Air fits into (travel, work laptop), I prefer a nicely curated *nix machine with working drivers out of the box. Apple has continued to improve on this by making this product class faster, more battery efficient, and cheaper.

There is a massive marketplace for tinkering on computers, from Arduinos to multi-GPU ML rigs. Trying to optimize for both classes of things seems like a foolish endeavor, especially when Linux users represent such a small fraction of the desktop market.


I hear this "drivers working out of the box" line all the time, but I've been running Linux machines for a decade now and have run into very few issues, comparatively speaking. My work makes me use a MacBook, and it has a lot of significant bugs that are not getting fixed. The trick with Linux is to use a popular distribution. The one thing I will fully concede is that Linux laptops have poor battery life.


>I feel kind of grossed out, as a developer (and tinkerer) by how locked down Mac products are

That's part of the value proposition (take it or leave it).


Hopefully it's not just secret Apple sauce that makes these powerhouses, and other chip makers will make ARM-based processors soon enough, giving us the choice we deserve. (Given Graviton's similar performance bump, this is likely the case.)


It doesn't make a huge amount of sense to buy a Mac if you're not going to use Mac OS as your daily driver. A lot of the benefits (e.g. battery life, touchpad quality) are dependent on software as well as hardware, and are greatly diminished on Windows or Linux.


I've never been that impressed with the Mac Mini's battery life or touchpad :-)


Actually, I had a Mac mini with the touchpad and the damn thing disconnected three times a day. All my input devices have wires now, and they stick out of the right places.


Touché. But seriously, most people who want to run Linux on Mac want to do it because they like Apple's laptop hardware. If you want a compact Linux desktop then a NUC should probably serve you just as well. Or at least, this was the case while Apple was still using Intel chips. If Apple Silicon lives up to expectations then I suppose there could finally be a compelling reason for running Linux on a Mac desktop.

To be clear, I'm not saying that there couldn't possibly be any good reason for wanting to run Linux on a Mac desktop. But desktops are already a niche product for Apple, and people who want to run Linux on Mac desktops are arguably a tiny niche within a niche.


Mac Mini doesn't raise those issues.


libinput's touchpad support has been pretty great recently. I'm working on an XPS 17, and the touchpad is - no joke - just like the touchpad on my previous MBP.


You can speculate but we will never know for certain.


I realize Federighi's reply seems to rule out Linux, but the context of the question seemed to be with respect to Boot Camp and Windows. My take is that Apple doesn't want to continue to invest in Boot Camp, especially since Microsoft apparently isn't willing to license ARM Windows for this use case.

It's not clear to me that the new Macs won't allow booting Linux if the Linux community can figure out how to do it. The number of folks booting Linux on Mac via Boot Camp has to be really tiny.


> It's not clear to me that the new Macs won't allow booting Linux if the Linux community can figure out how to do it.

Mainline Linux support requires a lot of work from vendors. Check out the ARM SoC Linux market for an abundance of examples of this problem. Many of the devices will be forever stuck an old kernel fork and will never run a mainline kernel.


Getting drivers to work will be hard without Apple's help or blessing. And there are a lot of drivers.

For comparison you can check the progress of Linux on iPhones (which is actually a thing!)


Yeah, agreed, but my take isn't that Apple is going out of their way to prevent it, just that they have no interest in spending any resources on it. Some conjecture here about what will be possible:

https://forums.macrumors.com/threads/running-linux-on-apple-...


The proprietary custom GPU must be a problem.


But you can disable Secure Boot and boot whatever OS you want, so unless there's some other hardware gotcha it's not like someone couldn't get Linux running if they wanted to put the time in (which is a big if, considering there's no UEFI-ish helper like on the Windows ARM devices).

https://support.apple.com/guide/mac-help/macos-recovery-a-ma...


There are quite literally millions of ARM devices out there that will never have Linux support, and millions more are being produced each year.

When it comes to ARM SoCs, Linux requires vendor support to get it running. If you want mainline kernel support, that requires even more work that many vendors just aren't providing.

A locked bootloader is just one issue to overcome for Linux support. A lot of the real issues come down to the lack of an enumerable bus on ARM SoCs, along with a lack of drivers.

Without vendor support from Apple to support Linux, these devices will be like the millions of iPhones and iPads that don't run Linux and will never run Linux.

Most ARM SoCs that are sold explicitly as mini Linux computers also have this problem. Many of them are stuck on old kernel forks, because vendors didn't give the proper support their SoCs needed to run a mainline Linux kernel.

tl;dr: For Linux to be a viable option on Apple's SoCs, Apple needs to put in a lot of work to explicitly support Linux. Without that vendor support, you will never be able to download a Linux ISO and install it like you can on an x86 Mac.


Millions of iPhones and iPads that don’t do what now? https://projectsandcastle.org/


There's a gulf between getting a kernel fork to run on an ARM SoC and getting mainline Linux support for it.



You’re misunderstanding that quote. Apple has never claimed they won’t support booting something else (in fact, there are ways to enable this by removing signature checks); they were just explaining how their demo works.


Challenge accepted :)


You’re likely to hit the common problems porters face with putting Linux on an arbitrary ARM SoC. These chips have lots of integrated components on them, requiring device drivers that may not exist for Linux. Take the custom Apple developed in house GPUs for example. Good luck finding any kind of Linux device driver for those, open source or not. It gets even worse for things there isn’t even an external equivalent of, like the neural engines.

Even if Apple does nothing to stop you running whatever software you like on the device, you’re still likely to be out of luck. I wouldn’t be surprised if some enterprising folks have a good run at it, but it’s likely to be a massive undertaking.


I can pretty much guarantee you that trying to run anything other than macOS on Apple's silicon is going to be an exercise in frustration. You will presumably be able to run an Arm build of Linux in a VM--given that Apple has demoed this--but if you want native Linux, I'm not sure why you would pay a premium to possibly get a bit more performance on a laptop while probably having various support issues.


If you want to use Linux for the tools, then just use a VM.

But if you want full control over your hardware... Apple isn't the way to go. I'm not even sure what the "OS of our choice" means when we're talking about a custom-designed SoC. The amount of reverse-engineering required to get any other OS to work would be staggering, no?

If you want to run a custom OS natively, you need to buy a laptop with a commodity chip, not a custom one. Fortunately, there are tons of them.


ARM actually has a defined architecture and UEFI equivalent, which would have worked wonders here.

If Apple had decided to support it, that is.


> I hope Apple allows us to install the OS of our choice. The battery life is impressive but I refuse to not use Linux.

Apple's hypervisor technology runs natively on the M1; Linux running on that will be faster than Linux running on anything else you can buy for the same amount of money.

They showed Debian running on Apple Silicon during the WWDC keynote nearly 6 months ago.


Tuxedo Computing and Slimbook both sell Ryzen 4800H computers that will outperform the M1 in heavy multithreaded workloads and come with Linux preinstalled. These laptops aren’t quite as slick as the MBP but weigh in at 1.5kg, have huge 91Wh batteries, and have a better keyboard (I have one from a different OEM, but same ODM design). They also have user upgradable memory and storage - I am running with 64GB RAM and 2TB SSD at a total cost (with upgrades) of less than what Apple is charging for their base 8GB/256GB MacBook Pro.

I expect a future “M2” to maybe take the performance crown, but AMD isn’t standing still. Cezanne has Zen 3 cores, which should boost IPC by about 20%, and Rembrandt should get to 5nm and have RDNA2 graphics.


> Tuxedo Computing and Slimbook both sell Ryzen 4800H computers that will outperform the M1 in heavy multithreaded workloads and come with Linux preinstalled. These laptops aren’t quite as slick as the MBP but weigh in at 1.5kg, have huge 91Wh batteries…

1. You're not going to get 20 hours of battery life.

2. Don't forget it's not just the M1—it's the unified memory, the 8 GPU cores and the 16-core Neural Engine. Most CPU and GPU-intensive apps are going to run faster on the M1 than on your machine. Even x86-64 apps using Rosetta 2 on an M1 Mac may run faster, since those apps are translated to native code on the M1.

3. Mac's SSD is probably faster; it's essentially a 256GB cache for the processor.

4. The Mac can run iOS/iPadOS apps too.

5. If done right, Linux compiled for the M1 will likely run faster on an M1 Mac than it does on a machine like yours, especially if Apple provides a way to access certain hardware features.

We’ll have to see what happens, but expect these machines to be pretty popular with users, even those who need to run Linux, once the distros are updated.

We shouldn't forget that the underpinnings to all of this is Darwin, the BSD-derived Unix layer which is already running natively on M1, including the compiler and the rest of the toolchain.


> 1. You're not going to get 20 hours of battery life.

Sorry to burst your bubble, but you're not going to get 20 hours of battery life in real-world usage on the M1 either. The early tests show about 10-12 hours, which is the same as my (and many other) laptops under regular usage.

> 2. Don't forget it's not just the M1—it's the unified memory, the 8 GPU cores and the 16-core Neural Engine. Most CPU and GPU-intensive apps are going to run faster on the M1 than on your machine. Even x86-64 apps using Rosetta 2 on an M1 Mac may run faster, since those apps are translated to native code on the M1.

Now it feels like you're just regurgitating marketing talking points. Can you tell me what "unified memory" even is, exactly? Is it zero-copy support? Because AMD has had that on its APUs since... 2013 or thereabouts. Is it LPDDR4 on a PoP package? Because all that means to me as an end user is that I can never upgrade my memory and that I'm limited to 16GB of memory (which I regularly go over - I am using 19GB of RAM right now just with browser tabs open). As for performance, we already know from the early testing that the M1 under-performs 8C Zen 2 for heavy MT workloads like compiles and renders, so... what are you saying exactly, that somehow running software via emulation/translation will magically make that faster?

> 3. Mac's SSD is probably faster; it's essentially a 256GB cache for the processor.

Again, why would you simply assume that a Mac's SSD is "probably faster"? It in fact is not. The 256GB SSD in the M1 MBA was tested at 2676 MB/s reads; my value NVMe SSD, a $200 2TB ADATA SX8200PNP, does 2917 MB/s in my laptop. As for SSD as cache - what are you talking about? L2/L3 latency is typically about 10ns. NVMe latency is typically on the order of hundreds of microseconds, roughly 10,000x slower.

> 4. The Mac can run iOS/iPadOS apps too.

Poorly, but surely this is irrelevant to Linux performance?

> 5. If done right, Linux compiled for the M1 will likely run faster on an M1 Mac than it does on a machine like yours, especially if Apple provides a way to access certain hardware features.

Which hardware features? This is rhetorical. I know this is just hand-waving.

> We’ll have to see what happens but expect these machines to be pretty popular with users, even those who need to run Linux when that the distros are updated.

We'll see what happens. You can track the state of Docker here, for example: https://news.ycombinator.com/item?id=25119396

> We shouldn't forget that the underpinnings to all of this is Darwin, the BSD-derived Unix layer which is already running natively on M1, including the compiler and the rest of the toolchain.

Darwin/macOS may be POSIX compatible, but it is not production compatible with Linux. Like lots of other devs, I've used Macs in the past (for many years), and you always run into compatibility issues, small and not so small, until you're running either a completely parallel devchain via Homebrew or MacPorts, or a VM. Honestly, WSL these days is a more Linux-friendly dev environment than macOS. But then again, it's even easier/better to just run Linux and Docker these days.


> The early tests show about 10-12h, which is the same as my (and many other) laptops under regular usage.

Here's an early test that’s quite different from what you described. I’d bet dollars to doughnuts your laptop can't play fullscreen, 4k/60fps video for 20 hours using only the battery:

In fullscreen 4k/60 video playback, the M1 fares even better, clocking an easy 20 hours with fixed 50% brightness. On an earlier test, I left the auto-adjust on and it crossed the 24 hour mark easily. Yeah, a full day. That’s an iOS-like milestone.

Another one: Just 17% of the battery to output an 81GB 8k render.

These are just a couple of highlights from the article "Yeah, Apple’s M1 MacBook Pro is powerful, but it’s the battery life that will blow you away": https://techcrunch.com/2020/11/17/yeah-apples-m1-macbook-pro...

You have to look at the totality of what's going on.

In short, the M1 Macs are right up there with the fastest machines available at reasonable prices and at a fraction of the power consumption.

The machines set a new level of performance per watt and there's no disputing that. That's pretty good for their first attempt at Apple Silicon Macs.


> The early tests show about 10-12h, which is the same as my (and many other) laptops under regular usage.

Yeah, laptops that look like bricks.

> Can you tell me what "unified memory" even is exactly?

GPU shares memory with the AP

> Is it LPDDR4 on a PoP package

On SoC, no PoP

> I'm limited to 16GB of memory

Wait for new hardware

> which I regularly go over - I am using 19GB of RAM right now just with browser tabs open

This isn’t how memory works :/


> Yeah, laptops that look like bricks.

Eh, the laptop I'm using at the moment has a 15.6" display and 91Wh battery and is less than 100g heavier and about 1mm thicker than the 13" MBP. It's also 500g lighter than the 16" MBP. Lots of other properly tuned modern x86 laptops can perform similarly. For example the 14" 1.48kg 18mm 56Wh battery HP EliteBook 845 G7 manages >12h on NBC's wifi websurfing test: https://www.notebookcheck.net/HP-EliteBook-845-G7-review-AMD...

> This isn’t how memory works :/

Fair point that free might not be the best way to measure things, I have more tabs open now but still not doing work (obviously), so let's compare:

                total        used        free      shared  buff/cache   available
  Mem:       65328424    26679236    31393956     1165140     7255232    36284812
  Swap:      67108860     3197072    63911788
Totaling per-process shared/private memory (I use memstat.sh for this), the total I get is 18.37 GiB - lower, but actually not so far off.

This is only with two browsers (a few hundred tabs) and some resident Electron apps open, mind you. Before upgrading (w/ 16GB memory) I was often hitting swap, and now I'm not. But if you don't ever need >16GB of RAM, lucky for you I guess.
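
For anyone who wants to reproduce this kind of per-process total without a helper script, a rough equivalent on Linux is to sum the proportional set size (Pss), which avoids double-counting shared pages (this assumes a kernel new enough to have smaps_rollup, i.e. 4.14+, and root to read other users' processes):

  sudo grep -h '^Pss:' /proc/[0-9]*/smaps_rollup 2>/dev/null | awk '{s+=$2} END {printf "%.2f GiB\n", s/1048576}'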


I wouldn't pay for Apple hardware unless I wanted to use MacOS.


Why? The hardware's the nice bit.


Yeah, this. Imagine if we had the same hardware but designed for Linux; I'd pay a premium for that.

Although hardware-specific software from Apple is probably a big part of that draw too. I don't think we're ever going to see Linux prioritize particular hardware and put in the effort to make it integrate as well as macOS does.


I don't really get this. I switched to macOS because it's fundamentally BSD with a nice, well-integrated GUI. Almost all of the good OSS I love is supported nearly perfectly.

Perhaps I'm a bit jaded after running into too much bullshit trying to get Linux running well on laptops in the 90s and 00s. Since I made the move I never wax nostalgic for the "Good Ole Days" of fighting for hours to get Wifi working properly.

Even assuming Apple released the specs so you could port Linux to M1, on top of the usual laptop driver issues around the trackpad, wifi drivers, and video drivers, you also have to deal with the Secure Enclave. Without that, you are stuck with either a non-encrypted drive or running drive encryption on the CPU which is likely going to kill many of the performance gains from using the Mac hardware. Likewise, without the Secure Enclave, you lose fingerprint auth.
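
Side note: if you're curious what CPU-based disk encryption actually costs on a given machine, Linux makes it easy to measure; cryptsetup prints per-cipher throughput (aes-xts is the LUKS2 default):

  cryptsetup benchmark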

Not anti-Linux by any means, but dropping Linux on the M1 isn't going to get you the same performance or battery life by any means. You are far better just going with a laptop which was designed to be Linux friendly to start with.


> Since I made the move I never wax nostalgic for the "Good Ole Days" of fighting for hours to get Wifi working properly.

I can assure you that you haven't had to do that for quite some time, and that's not what people are looking for.

- I'm looking for a system that lets me run any damn thing I want without popups, blocks, firewalls, warnings, signed-binary requirements, etc.

- I'm looking to run and develop on the same environment I end up deploying to.

- I want a system that has native docker support, systemd and makes updating the whole system or installing pretty much anything as easy as one terminal command.

- It's important for me to trust my system: one where I know no single entity has more power over the machine than myself, and no secret upgrades I didn't ask for are going to be pushed my way.

- There's no telemetry in my ideal system, certainly not at the system level and patched out at the app level where possible.

- I want a system that is open, configurable, respects the four freedoms and is community-run.

macOS cannot give me this, no matter how "fundamentally BSD" it is. I value the freedom that free software gives that no closed-source BSD ever could.


IMO, the BSD/Darwin stuff isn't the problem, but rather all the recent additions that are just super invasive/restrictive/bloated - Gatekeeper and trustd, for example, which in my experience often chewed through CPU (not just when OCSP was down). IMO, even a few years ago (when I mostly switched off from Macs) the LaunchDaemon/Agent situation was getting totally out of control, as were notifications and updates (worse than Win10, even).

Here’s a script (that no longer works apparently due to a new system signing restriction) that disabled some of those, to give an example of the amount of crap running by default: https://gist.github.com/pwnsdx/1217727ca57de2dd2a372afdd7a0f...


> I switched to macOS because it's fundamentally BSD with a nice, well-integrated GUI. Almost all of the good OSS I love is supported nearly perfectly.

This is the reason I initially started using macOS more than a decade ago.

However, I've been told that I'm the wrong kind of user by Apple fans whenever I criticize Apple for transforming macOS from a pretty Unix into a locked-down App Store appliance.

The BSD parts of macOS are getting old and crufty, and are being locked out and overridden by Apple's proprietary and significantly-undocumented layer. For an example of this, check out how networking is done on modern macOS versus how networking is done on a BSD or Linux.

> Perhaps I'm a bit jaded after running into too much bullshit trying to get Linux running well on laptops in the 90s and 00s

Linux has gotten much better, and the problems of the 90s and 00s have vanished for my use case.

These days, at least to me, Linux is the pretty Unix that just works that macOS used to be.


Well... That's why I have a ThinkPad, that is certified on Linux. (So your prejudice is dated)

I'm literally trying to figure out how to install Python 3.6 alongside 3.9 on macOS... right now, and it's not a one-line command.

So... no. It has massive issues with developer friendliness. New macOS releases stall with Bluetooth mice and randomly lock my keyboard (MBP 2020). The only thing I can commend macOS on is battery life on a MacBook, and nothing else.


> install Python 3.6 alongside 3.9 on macOS... right now, and it's not a one-line command

To be fair, that's not easy on any OS (well, maybe Windows). Certainly on CentOS it is a chore to get two versions of Python installed simultaneously.


I'm on Ubuntu - it's not as mind-bogglingly hard as on macOS.

Unsupported versions - harder, but still a few commands...

Supported versions? sudo apt install python3.6 and done.
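
For the record, the "few commands" case usually looks something like this (assuming the deadsnakes PPA, the usual source for Python versions the default repos don't carry):

  sudo add-apt-repository ppa:deadsnakes/ppa
  sudo apt update
  sudo apt install python3.6 python3.6-venv
  python3.6 --version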


Python: another way to do this is to install Anaconda and then spin up virtual environments with specific Python versions.

  conda create -n myenv python=3.6
Having multiple versions of system Pythons can be complicated. I've learned not to touch the system Python.
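
Then switching into it is the standard conda workflow (assuming conda init has already been run for your shell):

  conda activate myenv
  python --version   # should now report 3.6.x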


pyenv[1] might help you out in this department. It's also cross-platform.

[1] https://github.com/pyenv/pyenv
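
For example, a minimal sketch (the exact patch versions are just illustrations, and this assumes pyenv's shims are on your PATH via pyenv init):

  brew install pyenv                # or the pyenv-installer script on Linux
  pyenv install 3.6.12
  pyenv install 3.9.0
  pyenv local 3.6.12                # pins 3.6 for this directory via a .python-version file
  python --version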


I've never used another BSD, but I think the reasoning behind my 'I prefer Linux' complaint would apply equally to almost any non-Mac Unix - my primary concern, or annoyance, is configurability.

Sure, it has a 'nice/well integrated GUI', but I'm not allowed to choose a different one. Good luck configuring the one they give you for different machines without lots of pointing and clicking. (Yes I know about `defaults write`, I tried to maintain a script to configure everything that way and similar for several years, things change every version, and it's a mess even when it works. It's not how they want you to do it, and it shows.)
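
For anyone who hasn't gone down this road, the scriptable route is a pile of lines like these (these two keys are long-standing, real ones, but plenty of others move or change meaning between releases):

  defaults write com.apple.finder AppleShowAllFiles -bool true && killall Finder
  defaults write NSGlobalDomain KeyRepeat -int 2    # takes effect after re-login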


I think part of the aversion is that we're seeing a generation come into being that doesn't understand that Unix > Linux.

The way that for Windows people Unix was "other" and bad and scary. Now we have legions of programmers who were brought up on Linux, and now think of Unix as "other."


Having had the fun experience of compiling a fairly heavy UI application on Unix, they all seem pretty "other" to me. Solaris didn't do anything weird, so it was maybe the only non-other. HP-UX had something really weird with linking and I feel like it was lacking some shell commands that were fairly standard. AIX did something strange with shared libraries and their error messages were decidedly non-standard, although they all had unique code at the beginning so at least it was easy to search for problems. I think AIX was the only one for which malloc(0) = 0, all the others at least produced a valid pointer. I can't remember what the problems with Irix were, I think it was just that by 2008 Irix was just old so getting an up to date compiler was troublesome. Linux was just as "other" compared to the rest, but it was increasingly full-featured. Solaris kept up for a while.

And admining them was definitely very different aside from the basic shell commands.


Because the software is also the nice bit


Well if you like both there's no problem is there.

The comment I replied to was 'I wouldn't pay for Apple hardware if I didn't want the software', implying that would be a stupid thing to do.

I prefer its hardware to anything else; I prefer Linux to macOS. So that's exactly what I'd want to pay for.


The hardware is the only good part, unfortunately. I would have clicked buy faster than the Flash could if the M1 could run Linux.


Isn't this pretty much an impossible ask, though? The hardware is great largely because Apple have invested so much in developing a custom SoC. But as a result, you can't easily run a generic OS on it. It's not like Apple just need to bridge the Linux jumper on the motherboard. Supporting Linux would require Apple to maintain millions of additional lines of code, and either hold themselves hostage to decisions made by the Linux kernel team, or maintain their own fork of Linux (which, aside from being based on BSD, is essentially how we got to Mac OS in the first place!)


Actually it would only require Apple to release internal documentation. There are enough Linux nerds to write all the drivers. Graphics will probably be the hardest.


"Only". It would probably be easier for them to maintain the Linux drivers themselves than to thoroughly document every feature of the SoC.

It's not just about individual drivers though, it's about the surrounding kernel infrastructure and the whole desktop experience. For example, getting instant suspend/resume working on Linux is not (I'm fairly sure) just a matter of writing a driver for a particular bit of hardware.


They don't need to do anything new. What they already have is enough. Their own software engineers could handle it fine. It's good enough for people who want this.

There are people who have clean-room implemented entire Nvidia drivers, without documentation. We can manage fine with whatever incomplete documentation Apple already has.


It's been a few years since I used Linux so forgive me if I'm off base here. But last time I used Linux with Nvidia, you had the choice of using FOSS drivers with mediocre performance, or having closed source drivers that performed on-par with Windows.


You can't run Linux on Macbook Pros released after 2016 in any meaningful sense anyways...


And that's your choice. I would start looking at AMD's Ryzen offerings because supporting Linux is not going to be high on Apple's list going forward.


has it ever been?


I ran linux for years on my MacBook via Bootcamp. I'd be surprised if Bootcamp ever comes back.


Why?


You can, but need to sign the OS image.


What can you do on Linux that you can't do on macOS?


Automate the entire set up of my computer using a declarative language. I use NixOS. Mac OS isn't even close.


Have a TCP stack with synflood protection? (The Mac stack was copied from FreeBSD in 2001, before syncookies/syncache were added, and hasn't meaningfully pulled from upstream since.)
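
For comparison, checking this on Linux is a one-liner; SYN cookies have been in the kernel for decades and are on by default on most distros:

  sysctl net.ipv4.tcp_syncookies    # 1 means SYN cookies are enabled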


I'm not surprised it's doing well.

Apple has deeper pockets than anyone else on the planet, and they have considerable experience doing this kind of thing –literally, decades.

Say what you will about Apple; this is a strong point for them.

But I'm still waiting for the M2 before I upgrade. I'm also interested in new form factors. Right now, they are still relying on the currently-tooled production line for their shells. They now have the ability to drastically change their forms.


I'd say Intel has the deepest pockets regarding CPU R&D, and yet they are being overtaken left and right.


I also feel like Intel's depth is limited by the breadth of CPUs they must develop. With every release they are shipping tons of specific combinations of cores and clock speeds to meet their market. Then you have the raw investment in fabs, which has turned out to be just lighting cash on fire for Intel. They make all kinds of claims and then fail over and over, plus they are hemorrhaging key talent. I think their soul really isn't in the game.

Apple has the luxury of building two or three chips total per year and simply funding TSMC fab. All of this is to fund the largest grossing annual product launch. If their chips fail at being world beaters, hundreds of billions of dollars are on the table. All in, Apple spends an incredible amount of money here, ~$1 billion. Per chip design shipped, Apple is probably spending much more but also getting their return on investment. It's such a tight integration that if TSMC were ever delayed by say, four months, I have no idea what Apple would do.

AMD is playing it smart, fast and loose. Best chip CEO by a wide margin. AMD's gains really are on Apple's back: their chip design is brilliant, and they get to reap the leftovers when Apple turns out its latest chip. They don't have to fund fabs, and they don't have to make crazy claims to appear relevant like Intel does. They just ship great bang for the buck, and the fab gains plus their own hard work have given them the best-performance title too. Going fabless was one of the most controversial choices ever made in the industry... and wow, was it the right move.


> Best chip CEO by a wide margin

Reminds me of this story: https://www.theregister.com/2018/04/16/amd_ceo_f1/


There is a natural progression where companies go from engineering-driven to finance-driven. It's usually a death march.


Intel's architectures have been massively delayed by process issues. They're still shipping Skylake (Aug 2015) architecture processors on the desktop and server because they waited too long to change strategy. About a year ago they announced they're going to start decoupling the microarchitecture from the manufacturing process. 2021 will show the first fruits of that labor with Rocket Lake, which is Ice Lake (Sept 2019) backported to 14nm. If they had done that at the first sign of manufacturing trouble (2014?) they could have had 2 more generations of IPC improvement and still be ahead in every way except efficiency. I guess Intel management was more concerned with not rocking the boat.


Probably right. I have family that works for Intel, and I have heard stories about the amenities and infrastructure (like "Air Intel," a fleet of corporate jets that take employees between Intel campuses).


Do you have an example of what they could do differently? Shrink the Mini, sure, but what could you do with a laptop, where the form is largely dictated by the size of the screen, the need for it to sit upright, the keyboard, and so on?


The obvious example that springs to mind, is a Mini in a TV box.

Also, the iMac could lose that bulge in the back.

MacBook Pros could become only as thick as required for the keyboard, with the logic board in the display section.

Batteries could become much smaller. We probably have good enough runtime, now, so it would be about reducing battery size.


This might get me back into the Apple ecosystem. I'll still wait for the kinks to be ironed out in the first generation though.


Likewise - so long as they don't ditch the headphone jack on this as well.


This, but I also hope they understand that most studio headphones have the cable attached on the left side, and come back to the pre-Touch Bar-era jack placement. With new, miniaturised components there should be enough internal space on the left side.


I feel the same way. Very interested, but not going to go in for first gen.


This is the 12th generation Apple Silicon processor design.

A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, M1


I think this is 11th generation, where the A14 and M1 are the same generation. I expect we will see a few other chips from this generation, perhaps a A14X for iPad Pros and a M1X for bigger laptops and iMacs.


The issue is with macOS running on Apple Silicon. There was a thread today somewhere with Docker mentioning that they are still working on support, for example.


First generation of MacOS on Apple Silicon?


iOS has always used the same kernel as macOS.


Sure, it's all Darwin, but userspace matters a lot - you can do lots of things on MacOS that you can't do on iOS (JIT, arbitrary web engines, assorted emulators, pop a root shell and load arbitrary kernel modules)


And that helps i.e. Docker how?


iOS XNU is compiled with different features than macOS XNU.


They are mostly the same.


Sure, but there are concerns beyond the chip itself.


I'm absolutely not worried about the silicon.

I am worried about the other hardware and MacOSX being total POS right now.


I'm not buying into Apple's eventually-closed desktop computer systems - I don't care what the performance is. They've been slowly marching towards iOS's closed ecosystem model on the Mac and with an in-house CPU, they can effectively lock users out of alternate OS choices on their hardware. Buyer beware.


I get why this idea of becoming iOS-like is uncomfortable for a lot of people, but will they take away the POSIX-ness of the OS? What about all the people that spend all their time in Vim, TMUX, Emacs, and/or zsh? I think these people will continue to be pretty happy on macOS if they already are.

In my experience, installing alternate OS's on Mac hardware has never been frictionless or satisfying anyway.


I don't see why they would do this. If they just want to sell iPads, they could do that tomorrow. Just release XCode for Linux and Windows so that devs can create iOS apps and call it a day.

Obviously they still intend for the Mac to remain a general purpose computer or they wouldn't be putting this much effort into it.


When GiorgioG speaks of a closed, iOS-style ecosystem, I don't think they mean literally running iOS on laptops.

Rather, they foresee OS X becoming a system where you can't run programs that haven't received Apple's blessing and been brought through Apple's store. Blessings that will be denied to software like youtube-dl.

As to why they would do this? A combination of the good of most users, who will enjoy protection from malware and viruses; and the irresistible temptation of a 30% cut of all sales.


People have been fear mongering this for a long time now without anything to show for it. People see the writing on the wall but honestly I don't really buy it.

They added System Integrity Protection (SIP) in El Capitan in the name of security which clamped down heavily on what people could do with their system, limiting hacks that allowed for modification of system applications, etc. Outwardly you can claim that this is a sign of them making it so people don't have control over their system. The reality is that it can be turned off by anyone that cares enough to do so.

They added checks to inhibit installation of unverified executables from the web. If you don't have a signature from Apple, you can't run it. That surely means they're taking control, right? Well, except that it doesn't actually stop you from installing the software.

They keep adding checks to inhibit users from endangering their system (such as much more granular permissions in Mojave/Catalina), but they have not made it impossible to execute any arbitrary code you want on their system if you want to.


Exactly. They still need something to develop iOS and MacOS themselves on, so unless they want to move all their internal lower-level development over to Windows or Linux (which, IMO, doesn't seem like something Apple would do), they'll need to continue producing something resembling a general purpose computer.


Why do you think Xcode won’t ever make it to iOS?


I wouldn't count on Apple leaving the PC market.

The tablet-dominated world never materialized.

The post-PC era isn't here; the PC is still dominant. The iPad Pro has become more laptop-like faster than any laptop has become iPad-like. iPad sales are either stalling or declining.

Yes - Apple clearly wants to keep that laptop market and stay general enough to be useful. But "general purpose" is aimed at the general public, not your average HN reader.


> will they take away the POSIX-ness of the OS?

It's not only about POSIX. After the X years of planned lifetime (with proper software/OS updates), will there be any solution to extend the lifetime (which is what I used Linux for, on >10-year-old laptops)? I guess there will be no solution against planned obsolescence...

And with regards to control and privacy: will Apple finally give up their policy of deciding (and tracking) "for your own good" which apps you are allowed to install and launch?


>In my experience, installing alternate OS's on Mac hardware has never been frictionless or satisfying anyway.

It would be if they take some effort to support it.


> It would be if they take some effort to support it.

Oh, definitely. But the Linux (or alternate OS) fans are not really on Apple's radar. OTOH, they do a good job of keeping some core binaries up-to-date, like zsh and Vim, and they did make a point of good compile times during the M1 release event, so they clearly consider POSIX users part of their target market.


There's no link between compile time and "POSIX users".

Last I checked, I can compile and deploy an iOS app without the need for anything POSIX.


Bootcamp has always run fine for me. I've never tried to install Linux on my Macs.


As someone who sympathizes with your perspective, I do believe that this is a minority position. Most people don't care about closed ecosystems - just look at Facebook's popularity. Look at the Apple App Store. Government intervention would be needed to break up these closed gardens.


> I do believe that this is a minority position. Most people don't care about closed ecosystems

You're absolutely right. I bought my first MacBook Pro (17") in 2007 (and it's still running at my parents' house!) Over time all the machines in the house save my work/gaming rig have been replaced by Macs (iMacs, MBP, 12" MB, etc.)

Now I plan on reversing course. I'll keep my iPhone for now because everyone in the family lives in the blue bubbles (iMessage.)


Re: Reversing course.. what will you replace your Macs with?


The iMac will be replaced with a custom-built PC.

The MacBook Pros are another story. I don't know. I can't see me buying any new Intel Macs since they'll be phased out at some point.

I've never had much luck with the Dell XPSes. I may give ThinkPads a look (I have a P50 at my current job, which has been fine, aside from all the corporate antimalware slowing it to a crawl).


Thanks for the response. I'm hesitant to move my family away from Macs because they have been relatively easy to support.


I'm hesitant as well, for the same reason. Having said that, my personal PC (desktop) is still running well 3+ years after initially installing Windows on it. My in-laws' very old laptop is still running fine (it's at least 8 years old and was upgraded (by accident!) from whatever version of Windows it was running prior to 10). Aside from user errors (accidentally installing adware toolbars in Chrome, etc.) there really haven't been any issues with Windows itself.


I feel exactly the same way, but I do wonder how this gap is going to close. If Microsoft team up with another ARM vendor and make similarly closed-off, proprietary glue sandwiches as a response, where does that leave Linux? I doubt PC-compatible, x86 laptops are going to disappear off the face of the Earth anytime soon, but... if there's basically two types of machines, and ours have worse performance and half the battery life at twice the thickness (plus a fan, as a free bonus), a Linux machine is a tough sell to someone who hasn't already bought in. For the Linux workstation experience to keep up, we need more people coming in, and if new people can't dip their toes on hardware they already own, that raises the bar significantly.


Look at that new Raspberry Pi where everything is built into the keyboard unit, like the old Apple II and Commodore 64. Linux's future is brighter than ever. Linux hardware that is unique in its own right is much more interesting than trying to install Linux on PC or Mac hardware and beat it into submission.


You just have to look at the phone community (XDA Developers, etc) to see how this will (eventually) go.

A good example were the old Asus Transformer tablets. They were a super niche device, but they still lent themselves to Linux, and so a small team of people managed to load Ubuntu on them.

Another are Samsung phones. They try to lock people out, but they have popular enough devices that people find a way to put LineageOS on them.

Finally, even iPhones aren't immune. Small teams of people have managed to load Android on them and get it (partially) working. More people would give them even greater functionality.

If laptop manufacturers lock things down with ARM, there will be people who work around those mitigations and install their own OS on that hardware. Tooling will be developed to make that process easier and easier for the next round of people with that device (or future devices). It'll suck up front until the community grows large enough to work around issues faster and faster.

And that's even supposing worst case scenario. I'm not fully buying the idea that you _won't_ be able to change the OS on these laptops. Microsoft has tried (and failed) to lock other OS's out of their laptops. Chromebooks are (currently) the largest market of ARM laptops and you're able to change the OS on them. Apple might be the only company even remotely able to hinder freedom on their devices.

Either way. In the war on general computing, I'm generally optimistic for the users.


That's actually quite a depressing outlook - while the community can often get undocumented, closed, user-hostile hardware to run their OS of choice, it's hardly ever as seamless as installing a modern Linux distro on just about anything x86, usually without issues - mostly thanks to standards such as BIOS/UEFI, ACPI and others.

Also, even if you liberate a single device, it does not mean all your hacks will work on the next one - it's a never-ending battle. And without making sure manufacturers actually respect some standards, as they do on x86, it might become a losing battle long term...


Sure, the lockdown situation is worse than things are currently. You're likely to need per-device hacks that unlock it and enable freedom for users.

Even in that scenario though, ARM devices use standards too. There's a reason I can generally pick up any Android device and know what needs to be done to build my own OS for it. We just lack tooling that makes that incredibly easy and lack maintainers who want to make those devices work with the mainstream linux kernel.

Having open devices though (outside of Apple) is still my bet. We still need to make that process smoother but that just means there's lots of low hanging fruit :)


Yeah, postmarketOS is really trying; see the list of devices it can run on in some capacity: https://wiki.postmarketos.org/wiki/Devices

Still, I don't see this scaling unless more of the ARM stuff is standardized or upstreamed by manufacturers - IMHO there simply aren't enough OSS developers both willing and able to do the often menial yet necessary platform adaptation work.

For that reason I'm more hopeful about built-to-be-open hardware like the PinePhone, as that could help reduce or remove the device-support treadmill, so useful features can actually be developed. :)


I'm guessing we are moving towards the same system embedded vendors use: patch the Linux kernel so it runs, but never upstream anything, so you end up running an old OS that will never be updated again.


Exactly. I don't care what they do, I'm not buying into their closed garden of Eden. I've already replaced Spotify with Funkwhale, and I'm using Linux; I'm not their target and I never will be.


[flagged]


Thank you, for letting us know you appreciate his opinion.


Going on internet comments they've been marching towards this for years. I've yet to see anything actually happen.


> Going on internet comments

So instead of going on evidence and public statements by Apple executives - statements that have repeatedly said the Mac is the Mac - you choose to believe random internet comments?

There is a delicate balance between protecting average users who have no clue what software is safe and what isn't vs allowing power users and developers to do what they want. Since the days of ActiveX controls we've known that if you give users a "Please pwn me" dialog they'll just click "OK". They've been trained that computers put up lots of pointless dialogs they can't understand even if they take the time to read them, so they just click until it gets out of the way. Even with default security settings, if opening an app from an "unidentified developer" fails you can go into System Preferences > Security and click "Allow".

macOS is trying to protect people by default while still allowing the HN crowd to turn these protections off if they so wish.

Apple Silicon Macs still allow you to disable SIP which turns off a lot of modern protections. You can still downgrade boot security. It is a deliberate decision to continue allowing ad-hoc code signing. Software can still be distributed outside the Mac App Store either with a Developer ID or without. The vast majority of Mac users don't know or care what any of these things are but the Mac has always allowed them and as Craig has said several times over: the Mac is still the Mac. It is still the system that supports hobbyists and developers - people who sometimes want to poke at the system, install their own kernel extensions, etc.

If your complaint is that things are not wide-open by default anymore then I don't know what to tell you. We don't live in the same software landscape we once did and there are far more malicious actors out there. Protecting users by default is the right thing to do IMHO.
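
For reference, the escape hatch is still the same one it has been since El Capitan - a single command run from the Terminal in macOS Recovery (not from a normal boot), reversible with csrutil enable:

  csrutil disable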


Are you sure you’ve replied to the right comment?

I was literally making the point that these rumours have persisted for years and nothing has ever come of it.

I couldn’t agree more with the rest of your comment!


My apologies, I did reply to the wrong comment!


Why can’t there be a solution between “I accept Apple’s vision for how I can use my device” and “turn all the security off”? Why can’t I run my own trustcache, or tell the OS to trust my developer certificate, or write little specially-entitled commands so I don’t have to turn off GateKeeper and SIP and AMFI. Can you really call a Mac a Mac if to do anything on it you have to completely roll back a decade of protections?


Eminent Internet speculation about Apple is correct surprisingly often.

EDIT: I'm getting downvoted, maybe because I was too cryptic. To clarify, it seems like the rumors that gain traction with mainstream sites and Apple-focused YouTube channels and forums tend to have strong correlation with something that will happen later.


This speculation has been going on for years.

They're not going to remove the ability to run arbitrary code or the unix core. There would literally be no reason to buy a mac over another product of theirs.


I suspect there is an element of truth to it. I agree they will keep the Unix core and allow you to use clang and ld however you want. I suspect, however, they're not interested in supporting alternate OS's nor are they interested in allowing typical users to download normie GUI apps from anywhere.


Oh absolutely, it's just not as extreme as people make out.

Yes, they won't support dual booting linux - you can run a VM, but if that's a dealbreaker - fair enough.

There's absolutely no chance however that savvy users will not be able to continue running non app store apps.

Apple want control, but they're not stupid enough to completely lock down the development machines for their entire ecosystem.


> Apple want control, but they're not stupid enough to completely lock down the development machines for their entire ecosystem.

There isn't much people can do about that, though. You basically have to run their OS to develop apps for Apple products, and people who don't develop apps for Apple products don't use Macs anyway. Just make a laptop App Store with everything you need to develop for Apple platforms; they could force all programs to go through it and would lose almost no laptop users.


>they can effectively lock users out of alternate OS choices on their hardware

People buy Macs to use macOS. Some will also use Boot Camp for Windows if a VM is not enough for their tasks. And installing Linux instead of macOS on a Mac even sounds strange.

So - nothing to be aware of. Macs were always built to run Apple OSes.


I'm with you. I'm not going to support this crap.

We might be in the minority at the moment, but the harder Apple makes it for third parties to repair their machines, the less likely people are to buy such an expensive machine in the future, where a broken key means $500+ in repairs.


This is really impressive for (what I perceive to be) an entry-level computer. I always enjoy reading the technical rigor of AnandTech's deep dives, and seeing Apple's first chip go head to head with Intel and AMD's chips indicates the future is bright. I can't wait to see how the 16 inch MacBook Pro, the iMac replacement, and the Mac Pro replacement test out.


I'm not a big fan of Apple, but this makes me genuinely happy. Seeing discussions and benchmarks of CPUs where Intel isn't even a contender is fantastic. What a lovely day!


I think this will get interesting if/when MS, Nvidia and others start using ARM CPUs more widely as well. Nvidia just bought ARM, so that would help them get rid of Intel as a middleman for gaming hardware. MS already has Windows running on ARM, but that seems to be a budget-laptops-only kind of thing so far. Also, they are shipping AMD in the Xbox, which is interesting. But you could see Nvidia potentially building an SoC with graphics + CPU running Windows. Most game engines already target iOS and Android, so there should be no issues porting to ARM on that front either.

AMD ought to be paying attention. RISC-V could be an alternative at this point if they want to push the market in a different direction. Having to license ARM from Nvidia would not be their dream scenario, I imagine.


AMD already licenses ARM. Ryzen CPUs have an on die ARM CPU for handling part of their platform security.


Intrigued by the single-thread main memory bandwidth being a multiple of what you get from a single SKX. We also see this with Graviton 2. The latency is not terrible, either. How would this much available bandwidth change your choices when optimizing your algorithms?


> "Whilst the software doesn’t offer the best the hardware can offer, with time, as developers migrate their applications to native Apple Silicon support, the ecosystem will flourish."

I think that more developers would be excited about migrating their apps to support native apple silicon if Apple wasn't so developer hostile at the moment. I am referring to stuff like Apple's Online Certificate Status Protocol (https://blog.jacopo.io/en/post/apple-ocsp/) and their Apple Tax war.

They need customers but they also need developers.


You should quit HN for a day or two. The stuff the HN echo chamber repeats often has very little to do with reality. Apple has many thousands of developers. It is not hostile to them in any way, shape or form. For some reason this cliche is most often repeated by the people who never wrote a single line of code for iOS or MacOS.


>It is not hostile to them in any way, shape or form. For some reason this cliche is most often repeated by the people who never wrote a single line of code for iOS or MacOS.

Maybe the word "hostile" is the wrong word. But I have apps on windows that ran on 95 that still work to this day without having to be "rewritten". It's no surprise it's often repeated by people who never wrote a single line code for an OS that they have come to expect will change things so dramatically that they will have to spend more time and effort supporting those OS changes and not creating software.


> But I have apps on Windows that ran on 95 and still work to this day without having to be "rewritten".

And that's the reason why Apple is reaping these benefits, while moving to ARM is completely at odds with what Microsoft stands for and promises, i.e. long-term software backwards compatibility.


Wrong. Apple's iOS/macOS development footprint is far far far smaller than the footprint of generic software development using macOS. I have seen a lot of developers shitting the bed this week over the state of macOS both in person and in various other places.

We already have a couple of people who have got so fucked off with macOS that their Macs are running Ubuntu 24/7 and they're buying Dell/Lenovo next time. Hell, I sold my Apple kit earlier this year because I was completely fed up with dealing with broken shit all the time. It's just a horrible experience.


Hey, I write code for those platforms, I think I have the right to say that Apple can very much be hostile if it wants to be.


I wouldn't care if their laptops have 10x the performance and power efficiency. Apple's closed ecosystem and "big brother knows best" mindset are bad enough on their own - but together, they are downright intolerable.


These are impressive performance stats and, given that I'm working from home all the time nowadays and with much less of a need for a laptop, the Mac Mini is actually a fairly attractive option.

Except for one thing: it's maxed out at 16GB of unified RAM. 16GB. In 2020 (nearly 2021). FFS.

Come on Apple: get your act together. The 16GB limit was frustrating as hell when I bought my last MBP in 2015; now it's absolutely unforgivable.

(The iMac obviously goes way beyond 16GB but isn't yet available with Apple silicon, and obviously the attraction with the Mini is the relatively ludicrous performance of that Apple silicon.)


They mixed up the GPU configurations in the article; currently it says the Mac Mini has 7- and 8-core GPU options, but it's actually the MacBook Air that has the lower GPU option.


This is a bittersweet moment for me. I hate Apple's walled-garden approach so much, but they have the hardware side of things (CPU, speakers, screens, etc.) down (mostly)...


1) Can I run VirtualBox on M1 (yet)?

2) What is the overhead of doing so with Rosetta 2 vs native on Intel?

3) What is the situation with VT-x?


1) VirtualBox is strictly x86_64 only, everywhere.

3) Arm virtual machines only. For now, Parallels has a preview that you can enroll to at https://www.parallels.com/blogs/parallels-desktop-apple-sili... or you might use https://github.com/kendfinger/virtual which uses the high level Virtualization.framework, for Linux VMs.


Benchmarks show M1 with Rosetta2 beats previous Mac iterations in Cinebench benchmark. See page 2 here https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

>> What’s notable is the performance of the Rosetta2 run of the benchmark when in x86 mode, which is not only able to keep up with past Mac iterations but still also beat them.


There currently isn't any kind of useful virtualization available for Apple Silicon, and my hunch is that there won't be until ARM becomes more relevant in the Windows/Linux space.

The way Apple Silicon runs x86 apps through translation makes virtualising x86 systems either impossible or at least extremely difficult.


The answer to all three of your questions is "If you are worried about this, absolutely do not buy an M1 Mac." Rosetta 2 cannot magically turn VirtualBox from a virtualization management system into a high-performance x64 emulator. The long-term solution is probably going to be running ARM Windows or Linux in a VM and leaning on Rosetta-style compatibility/translation in the client OS to run x64 programs.

Edit: Since this is attracting downvotes, maybe it needs some clarification. The things OP asked about fundamentally cannot work. Rosetta 2 is designed exclusively for user-mode programs and cannot cooperate with virtualization software to run arbitrary OSes in VMs. VirtualBox has no plans to port to ARM and will not work in Rosetta. None of this is negativity or cynicism towards M1 Macs - it's just the reality of how switching architectures affects virtualization. If your use case for Mac hardware is to run arbitrary x64 code at high speed in VMs, you should not buy an M1 Mac because that capability does not currently exist.


Yeah, I figure the only realistic solution for my work needs (running a bunch of x86/x64 Windows VMs) is to do that remotely on a Windows workstation.

I probably won't buy an M1 anyway, but I'll be extremely interested to see what everything looks like when the M2 rolls around.


You may be interested in this thread:

https://forums.virtualbox.org/viewtopic.php?f=8&t=98742

tl;dr: VirtualBox is an x86-on-x86 hypervisor, so there's no porting to do. It would be a rewrite.


Thanks all


Is there a media ban? I thought we'd get a bunch of MacBook reviews today. I guess it won't be long; they'll be in the shops soon.


Today is the day the embargo drops it seems. So all the normal sites are posting their reviews.


I don't think there are enough tests yet. Why is it only Cinebench and Geekbench? Show us real tests with ffmpeg, gcc, etc...



They state in the article:

> As we’ve had very little time with the Mac mini, and the fact that this not only is a macOS system, but a new Arm64-based macOS system, our usual benchmark choices that we tend to use aren’t really available to us.

I think most other benchmarks weren't compiled for MacOS on ARM yet.


There are multiple pages in the review with more benchmarks than the ones you mentioned.


There are benchmarks for SPEC INT 2006 which includes a gcc benchmark.


There are more pages behind that first page, see at the bottom of the article.


Does ffmpeg even compile on M1 yet?


Yes, it seems so. Looking forward to checking out how this will perform.

http://www.ffmpeg-archive.org/FFmpeg-on-Apple-Silicon-Succes...
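
The build itself should be the usual dance; a minimal sketch for a native arm64 build, assuming the Xcode command line tools are installed (the --disable-x86asm flag just skips the x86-only assembly):

  git clone https://git.ffmpeg.org/ffmpeg.git && cd ffmpeg
  ./configure --disable-x86asm
  make -j8 && ./ffmpeg -version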


I'm a bit surprised by their tagline "Integrated King, Discrete Rival".

I'm using an Acer Aspire V15 Nitro Black 15" from 2016. On Aztec Ruins Normal Offscreen, I get 270fps. So my 4-year-old $800 laptop is still faster than the brand-new M1. It seems Anandtech chose a very Apple-friendly set of laptops to compare to.


That would have you scoring higher than the Acer Nitro 5 (2020) with a 1650, so I doubt you're running the same benchmark.

This might be more accurate, 88fps:

https://gfxbench.com/device.jsp?benchmark=gfx50&os=Windows&a...


Lol. So that would be a +100% upgrade for fxtentacle.


So a 60W TDP dGPU (GTX 960M) is ~35% faster than this iGPU? I think that's what they'd call a rival.


Agree. But don't you think this would come off as a lot less impressive if Anandtech had included all of the old rivals from 1-4 years ago that still rank above the M1?

"If you currently own a 2016 15" Acer, buying the new 2020 MacBook will be a downgrade." sounds pretty lame to me.

That's why I said they had a very Apple-friendly comparison set.


I don't think people are considering replacing a 2kg+ gaming laptop with a fanless MacBook. The authors included two popular Turing dGPUs for comparison.


> But don't you think this would come off as a lot less impressive if Anandtech had included all of the old rivals from 1-4 years ago that still rank above the M1?

Not really. Why would you compare against old rivals instead of the current market? They had 1660 Tis on the charts too, which both obliterated the M1 and are not at all the high end of discrete mobile GPUs.

The "discreet rival" was because the M1 was competing favorably against the discreet 1650 & 560(X). As in, entry-level discreet GPUs make increasingly less sense (they already weren't making much sense with Intel's new Xe and AMD's Vega 8 & 11 integrated, but more nails in that coffin with the M1)


Interesting for many gamers: Blizzard announced native support for World of Warcraft[0] on Apple Silicon. This gives hope that other games from Blizzard will come to the platform as well, and may encourage other developers to join in.

The M1 has been shown to run Civ6 and Rise of the Tomb Raider through Rosetta faster than previous integrated-GPU Mac hardware[1].

[0]https://us.forums.blizzard.com/en/wow/t/mac-support-update-n...

[1]https://www.macworld.com/article/3597198/13-inch-macbook-pro...


I think I'm nearly ready to buy one of these... does anyone know of any benchmarks showing a JavaScript test suite running? It would be interesting to compare to my current machine... good Node performance (and the sort of stop-start, choppy test-suite workload) would probably make me go for it.


It's starting to look like ARM is the way forward in terms of performance and battery life, and I feel PCs will follow in the next few years.

My only hope is that this doesn't mean things get further locked down (such as being able to install Linux distributions or dual boot), but I have a bad feeling they will.


Can AMD/Intel pivot to ARM and provide the efficiency benefits to the non-Apple ecosystem?

Can Apple's M1X/M2 outperform desktop CPUs?

Qualcomm tried their hand at desktop CPUs with Microsoft a few years back. Is it time they tried again?

How comparable is a Surface Go with the performance/efficiency of M1?


AMD has tried a bit; Intel probably won't. You can still get great numbers on desktop x86 with better cores and processes - Zen 4 is perfectly good so far. Apple's M1 already outperforms most desktop CPUs, so it stands to reason that a model with more cores and a bigger L1 could outperform the whole industry.

The Surface Go is nowhere close: half as fast in single-core, 1/5th as fast in multi-core. The 5W TDP is really a generic number with no real meaning, as Intel doesn't really abide by it. I would say it probably uses about the same power as the M1, possibly much more under turbo, while also having a much higher power floor (i.e. at idle the Surface Go uses much more power).

Keep in mind that the Surface Go is very low-cost and the CPU is at a 14nm build.


Would an M1 with more cores be able to beat a Threadripper at the same wattage? Right now the M1 stands at a score of 8000 vs Threadripper's 25000. The comparison, I'm sure, is not just about benchmark scores, but is there a prediction possible, given that the M1 is at a 24W TDP whereas the Threadripper has a 280W TDP (a 3x difference in the benchmarks alongside a 10x difference in TDP)?

Does Qualcomm or Samsung have an M1 beater in their kitty?


It costs more to move 2 bytes into the CPU than to actually add them. As you go bigger, you spend an increasingly larger amount of time and energy moving data as opposed to actually calculating things.

Anandtech's numbers showed 50-89% of the 7601's total power consumption being used for Infinity Fabric. With 89W remaining spread among 32 cores, that's a mere 2.78W per core, or 1.39W per thread, at an all-core turbo of 2.7GHz.

Oh, I'd note that the 7601 is a 14nm Zen 1 part.

https://www.anandtech.com/show/13124/the-amd-threadripper-29...!


I’m really curious what Xcode and IntelliJ compile times look like for real world repos.

It’s a single use case, but it's by far the most common one where I genuinely feel my productivity slowed by my computer. Hopefully there's good news there as well.


Dave2D had Xcode benchmarks, and it seems like... it's faster than the iMac Pro, and even faster than his friend's Hackintosh. It might be the fastest, or almost the fastest, out of Apple's entire lineup.

https://youtu.be/XQ6vX6nmboU at minute 3


I'm looking at this repo[1] for reference. Something must be off, because a mid-2015 machine takes half the time for an incremental build. Of course the project matters, but Dave doesn't reveal the name of the app they are compiling, or I missed it.

[1] https://github.com/ashfurrow/xcode-hardware-performance


As someone using Jetbrains products, I would get no less than 32GB for my machine :)


This is a slight aside, but could someone more familiar with the field than me explain what the Neural Engine is for? Does any software currently use it, or is it something that Apple are trying to push? Is it designed to build ML models, or to apply them to data?

And perhaps more basically, what for? How much machine learning is done on user machines, as opposed to renting some cloud time? What are they anticipating will be done?

I ask these questions in genuine curiousity, I assume I'm missing something but this just seems like a rather wild divergence from what (very little) I knew of the ML field.


"Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption."

https://developer.apple.com/documentation/coreml

Apple has always been good at running ML on-device, as opposed to Google's approach of sending everything to the cloud, then mining all your data, selling it, tracking it, etc. One example is Apple's Photos, which implements feature detection on-device, while Google implements it in the cloud.

https://www.imore.com/no-apples-machine-learning-engine-cant...

Or for Google: https://www.theringer.com/2017/5/25/16043842/google-photos-d...

(Reading this makes me wanna puke)


It’s accessible through the CoreML framework. I know nothing about this and very little about ML in general, but I found an interesting GitHub page written by a consultant who seems to know a fair amount about it.

https://github.com/hollance/neural-engine


It does matrix multiplication and stuff that is useful for neural networks.


It takes a lot of money, people, and investment to bring a chip to market, years ahead of other competitors' technologies.

Maybe locking it down (to the disappointment of the "everything open" crowd) is Apple's reward for advancing the state of the industry and offering a high-performing chip beyond anyone's expectations. Maybe for companies to survive in such a business, they need to take advantage of the technologies they're able to bring to fruition, and use them to fund the 9/10 ideas that don't make it.


Interestingly, Anandtech is feeding into the well-deserved hype by comparing the 5950X to the M1; Intel, despite being two node generations behind, still gives the M1 good competition while also being a laptop chip.


This means there is (probably) plenty of opportunity for seasoned engineers to make serious money at Intel. But only for the ones with thick skin who can deliver despite a rotten company culture.


For the first time in a long time, the Mac mini is attractive again with this new M1 performance. I feel like my Ryzen 2 SFF build is old even though it's less than a year old.


Ryzen 2000 or Zen 2? The former is definitely more than an year old.


I meant Zen 2


We will have to wait for AMD Zen 4-based CPUs, which, like the M1, will also be built on TSMC's 5nm process.

Currently it's apples (have to :>) versus oranges: 5nm M1 vs 7nm Zen 3.


In reviews of the new M1 Macs as a whole, I'd like to see a comparison with Qualcomm's best: meaning the fastest current Windows ARM laptops. Up to now, their performance has at best only kept up with comparably priced x86 models, despite pressure on Qualcomm by Microsoft and the device makers. Maybe some new competition will shake Qualcomm out of its complacency...


In reviews of the new M1 Macs as a whole, I'd like to see a comparison with Qualcomm's best: meaning the fastest current Windows ARM laptops.

You may recall that Apple already embarrassed Qualcomm years ago when they shipped 64-bit ARM-based chips at least a year before Qualcomm could do it.

When the new iPhone ships, the next fastest phone is the iPhone being replaced by the new flagship phone. The Android phones based on Qualcomm’s best processors are way behind Apple's mid-level and entry-level phones.

The A series chips used in Apple's phones and tablets are way faster than anything Qualcomm is shipping for laptops, never mind the M series.


From what I've heard myself in the past, I don't think Qualcomm is as far behind as you say, but I would not be surprised if it was.

This puts me in mind of one time that I was ranting to a Microsoft colleague over lunch about how MS shouldn't be exclusive with Qualcomm for ARM chips given the very low rewards over the years. They said to me that when Windows Phone 7 first was under development in 2008-2010, choosing Qualcomm exclusivity seemed best because Qualcomm was the only one willing to make decent BSPs for us at Microsoft.

Upon reflection, I realized that this was still the case as of our conversation years later. The Mediateks, Samsungs, and Nvidias of the world either did not work with Microsoft at all, or got spurned by Microsoft themselves, or gave up after 1 or 2 high-profile failures (such as Surface RT). Texas Instruments was a notable exception as they gave up on ARM SoCs altogether, thus killing what would have become a TI-based Windows RT tablet platform.

Now, neither I nor my coworker was in a position to actually know what was going on here, but I think this anecdote illustrates the value of a trustworthy business partner, even when their products look mediocre.


Qualcomm was never far behind.

And the 64-bit ARM Apple chip surprised even ARM themselves, as ARM didn't even have a reference Cortex design out when Apple shipped their first 64-bit SoC (Apple was part of the early member programme). And no one thought they would need 64-bit so early (which was also true at the time).

Qualcomm has to optimise for cost: for the same die space Qualcomm already includes a modem, while Apple has the modem as a separate piece of silicon. It isn't that Qualcomm is technically subpar; they just have different objectives and goals. And vendors are already crying foul over Qualcomm's continued price increases (which are actually normal given the complexity of 5G, CPU, GPU and leading-edge node development).


Apple's first 64-bit SoC was the A7 in 2013—7 years ago.

True, the modem is separate but Apple's SoC has the GPU, Neural Engine and other stuff.

Qualcomm seemingly has never caught up when it comes to raw performance.


>Apple's SoC has the GPU, Neural Engine and other stuff.

That is the same with Qualcomm.


It's funny to see Intel falling from 1st place in an essentially two-player game just two or three years ago to 3rd place now.


Is it safe to assume that future ASi CPUs for desktops will have just Firestorm cores and no Icestorm, which should further increase MT performance?

I know Apple was trying to get to market quickly, but I fail to understand why we need Icestorm cores in a non-mobile CPU, especially with this already (really) low TDP.


More likely they will ship more Firestorm cores and keep the Icestorm cores. Their future chip designs will likely be shared across desktop and mobile. Keeping Icestorm lowers costs overall by allowing them to ship more chips, and it gives about a 30% performance gain in multicore.

Far more interesting to me is the idea that under heavy use, the Icestorm cores can run the OS, notifications and all that, allowing full, uninterrupted use of the Firestorm cores. Also, when the Mac is idle it uses far less power.

Basically, I fail to see a reason to not keep them :).


> Far more interesting to me is the idea that under heavy use, the Icestorm cores can run the OS, notifications and all that, allowing full, uninterrupted use of the Firestorm cores. Also, when the Mac is idle it uses far less power.

It's impressive what Apple has been able to do when they can fine tune macOS and ASi to work together.


While the M1 is impressive, everybody avoids the elephant in the room: they dropped Windows Boot Camp support, which makes it DOA for me. (And no, a VM is not a replacement.)

Zero media outlets report it, but there is a BIG performance issue with the M1:

You might think compilers and JITs have had decades of optimization for both ARM and x86. But actually no: the code that historically targeted ARM was really only C, C++, Swift, JS and Java.

All other mainstream languages such as C#, Python, PHP, Ruby, Fortran, Perl, R, etc. do have (I hope) ARM support, but almost no human resources have been dedicated to optimizing their ARM codegen, and it will take years to catch up. Where are such benchmarks?? This is a huge, fundamental, unaddressed topic! Ironically, I even expect such languages to run faster under Rosetta than natively on ARM!
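
(For anyone sanity-checking their own benchmarks, here is a minimal sketch for detecting whether the current process is running natively or under Rosetta 2 translation. It assumes macOS 11 on Apple Silicon and queries the documented sysctl.proc_translated flag; the helper name is just illustrative.)

    import Darwin

    // Returns true if this process is being translated by Rosetta 2,
    // false if it runs natively, nil if the sysctl key is unavailable.
    func isTranslatedByRosetta() -> Bool? {
        var flag: Int32 = 0
        var size = MemoryLayout<Int32>.size
        guard sysctlbyname("sysctl.proc_translated", &flag, &size, nil, 0) == 0 else {
            return nil  // key missing, e.g. on systems without Rosetta state
        }
        return flag == 1
    }

    print(isTranslatedByRosetta().map { $0 ? "running under Rosetta 2" : "running natively" } ?? "unknown")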


> While the M1 is impressive, everybody avoids the elephant in the room: they dropped Windows Boot Camp support, which makes it DOA for me. (And no, a VM is not a replacement.)

I'm not quite sure if it's even a mouse in the room. While it certainly is a problem for you and a niche group of like-minded users, the vast majority of Mac users, including me, simply don't care about Windows support.

Plus, there is good hardware to be found on the Windows side as well, so it isn't the end of the world in my opinion.


Another source: https://wccftech.com/intel-and-amd-x86-mobility-cpus-destroy...

At least in multicore, all of the Ryzen CPUs beat the M1.


Those are only Cinebench benchmarks.

Have a look at the SPEC2006 and SPEC2017 benchmarks: the M1 beats the desktop-class Ryzen 9 5950X or just trails behind it (edit: in single-threaded performance), keeping in mind the cost of each and that:

> While AMD’s Zen3 still holds the leads in several workloads, we need to remind ourselves that this comes at a great cost in power consumption in the +49W range while the Apple M1 here is using 7-8W total device active power.


> the Apple M1 here is using 7-8W total device active power.

Anandtech showed almost 27W power draw under full load for the M1 Mini.


That's AnandTech's quote. The M1 beats or just trails behind the Ryzen 9 5950X in single-threaded performance, hence the 7-8W figure.

The 27W power draw comes from multi-threaded load. Ryzen's multi-threaded power draw is ~130W (as far as I know).


AnandTech (TFA) found the M1 performing very well compared to even Desktop-class Ryzen in SPEC: https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


I find it amusing that people thought Apple Silicon was going to be crap, or that Apple would lie about it being good.

They are not insane! They wouldn't jump ship and go through all that expense and possibility of failure if they didn't know they had something amazing at the end of the rainbow.


This may be a bit of a stretch, but would the power savings (value of which, Earth aside, could be measured in local electricity costs) of an M1 Mac be significant enough in a year to justify upgrading an otherwise functioning Intel Mac?


Nope. Say the differential under heavy CPU load is 10W, say you run it under full load for 2,000 hours per year (which nobody does on a laptop), then you saved about 20 kWh, or roughly $3.
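
(For the curious, a minimal sketch of that back-of-the-envelope math; the 10W differential, the 2,000 hours of full load, and the $0.15/kWh rate are all assumptions, not measured figures.)

    // Rough yearly electricity savings under the assumptions above.
    let wattsSaved = 10.0        // assumed power differential under load, in W
    let hoursPerYear = 2_000.0   // assumed hours at full load per year
    let dollarsPerKWh = 0.15     // assumed electricity price

    let kWhSaved = wattsSaved * hoursPerYear / 1_000.0   // = 20 kWh
    let dollarsSaved = kWhSaved * dollarsPerKWh          // = $3
    print("~\(kWhSaved) kWh saved, roughly $\(dollarsSaved) per year")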


It's interesting what this will do for the positioning of the Mac mini. If you want a desktop with decent performance in a compact package, the Mini might just be the first choice, having languished with underpowered Intel CPUs for so long.


I am confused about one thing: in multithreaded scenarios, can an application use 8 threads or only 4? Also, how does the scheduling work? Can I pin a task that I know will be demanding to the Firestorm cores?


Applications can use all 8 cores. There is one benchmark in the article where they somehow turn off the efficiency cores and work out that they add about 30% to the total CPU power in multi-core.
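
As for steering work yourself: macOS doesn't expose direct core pinning, but quality-of-service classes hint the scheduler, which in practice tends to place high-QoS work on the performance cores and background-QoS work on the efficiency cores. A minimal sketch with the standard Dispatch API (the core placement noted in the comments is the commonly observed tendency, not a hard guarantee):

    import Dispatch

    // High QoS: the scheduler tends to run this on the performance (Firestorm) cores.
    DispatchQueue.global(qos: .userInteractive).async {
        // demanding work, e.g. an encode or a long computation
    }

    // Background QoS: typically lands on the efficiency (Icestorm) cores.
    DispatchQueue.global(qos: .background).async {
        // housekeeping: indexing, syncing, cleanup
    }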


Oh I see it. I missed their SPEC2017 page.


I feel like if one can hold out another year or two for the second iteration that will probably be a better long-term bet. My current macbook is about 6 years old.


Why did the single-threaded Cinebench measurement compare against the Ryzen 5950X, a high-end desktop processor, while the multithreaded version of the same benchmark only listed the 4900HS, a high-end laptop part with a fraction of the thermal budget and half the cores?

Other sites had the Ryzen 5950X pegged at 28,641 in the multithreaded version vs. 7,833 for the M1 Mac mini.

It's not really surprising that something with 4x as many high-performance cores as the M1, and a much higher thermal budget, is almost 4x faster than the M1.


The best option seems to be a hybrid.

Use the M1 Air to remote into a real, non-virtualized computer running Linux.


Here is the most important question: how well will it run Linux? I do not like macOS at all.


Should I be annoyed about buying a 2020 iMac 27" i7 5700XT 32 days ago?


I wouldn't be. The M1 isn't available in that form factor yet, and even if it were, it would be first gen; in the meantime you still have a really great computer.


Do we already know if it has native AV1 decoding capabilities?


How did Apple do this? And what does it mean for Linux users?


What kind of graphics APIs will it support? OpenGL?


Graphics API support is an OS/driver thing. OpenGL has been deprecated on macOS for a long while now, stuck on an old version (4.1). Apple also refuses to support Vulkan, so the only officially supported graphics API on macOS is Apple's Metal.
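
(For what it's worth, getting at the GPU through Metal is close to a one-liner. A minimal sketch that just queries the default device and prints a couple of its properties, assuming a macOS target:)

    import Metal

    // Ask the system for the default Metal device; on an M1 this is the on-die GPU.
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("Metal is not supported on this machine")
    }
    print("GPU: \(device.name), unified memory: \(device.hasUnifiedMemory)")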


The proprietary Metal API: https://developer.apple.com/metal/

I think they might still support OpenGL, but everyone considers that support second-rate.


Metal. Then there are OpenGL and Vulkan wrappers that run on Metal.


Good incentive to warm to the RISC-V ecosystem.


It makes me very annoyed that they aren't selling a minimal laptop version. I really don't get why they're doubling down on forcing the touch strip onto their laptops when nobody is demanding it and, thankfully, no competitor wants to acknowledge such a feature or the software burden that requiring such hardware implies.


It seems that Apple kept its eyes on the goal of beating Intel, while underestimating AMD (like just about everyone else).

Still, Apple now has the #2 fastest CPU on the market, and with a different ISA. Intel... #3. Oh, how the mighty have fallen.

At least AMD won't get to rest on its laurels, as Apple will definitely try to surpass Zen 4 too, or at least Zen 5.


Well, Apple released the M1 as their low-end chip, putting it in the entry-level slot of their low-power Macs.

They may have something with substantially more power in store soon.


The cheapest MacBook Air with an M1 chip is $1,000. The cheapest MacBook Pro with an M1 chip is $1,300.

I don't consider a $1,000+ laptop "entry level" or "low end". These are high-end machines.


> I don't consider a $1,000+ laptop "entry level" or "low end". These are high-end machines.

In the Apple ecosystem, $1,000 laptops are low-end devices.

The iMac Pro [1] and the Mac Pro [2] are high-end, professional-level machines. The iMac Pro starts at $5,000; the Mac Pro at $6,000.

The biggest difference is that Apple doesn't sell the commodity hardware that virtually every PC OEM does. That was a deliberate choice made many years ago.

BTW, the M1 Mac mini starts at $699 and blows away all the PCs in its class, including those that cost more.

Some Hollywood studios have already talked about replacing much more expensive computers with the Mac mini because it's so fast [3]. No joke.

To be clear, you're not going to render a full-length movie in 4K on an M1-based Mac mini—that's what the Mac Pro is for. But it can certainly handle less demanding 4K editing tasks that would have been unthinkable on an under-$1,000 machine a year ago.

[1]: https://www.apple.com/shop/buy-mac/imac-pro

[2]: https://www.apple.com/shop/buy-mac/mac-pro

[3]: Hollywood thinks new Mac mini 'could be huge' for video editors: https://appleinsider.com/articles/20/11/12/hollywood-thinks-...


At some point you have to recognize that different product lines and market segments have different entry points. Otherwise you end up comparing everything to the Raspberry Pi, and every computer is "High-End".


Well, it depends.

The Renault Zoe EV and the Tesla Model 3 have the same price.

The Renault Zoe is very low-end compared to a Tesla, so what makes them cost the same?

An EV includes technology that is very costly, so even a mid-range car ends up costing as much as a base offering in a higher segment (the Zoe EV is the premium offering in its segment).

You can't get any lower than that.

In fact, a Zoe with an ICE engine costs 10k less.

The same exact car.

There is no equivalent for Tesla; Tesla does not make cars in that segment, and even if they could, they wouldn't.

In other words: a low-end Mac costs as much as, and has the specs of, a high-end machine.

Heavily limited by the manufacturer (only 16GB of RAM tops?) but definitely not low-end, not even for Apple.

It's their base offering for the high-end segment.

Which is very different from saying it's a low-end machine.

They aren't low-end, they are simply not premium (there isn't going to be a big difference in performance between the two, only different positioning, equipment and fewer artificial limitations from the manufacturer).

They are like the AMD K6 CPUs that you could overclock using a pencil.

The conclusions of this review support the idea that the specs of the Mac mini are not far from what we could expect from the pro models

> In the new Macbook Pro, we expect the M1 to showcase similar, if not identical performance to what we’ve seen on the new Mac mini


High-end, maybe. Not high-performance. There are segments and use cases where Apple simply doesn't have an offering.


You may not consider it such, but they are the entry level Apple machines at the lowest end of their product range.


You have the Mac mini, which is well below $1k, but for mobile macOS? Yes, $1k is the minimum.


That doesn't mean they are low-end laptops. A Rolls-Royce is never a low-end car either. They are the cheapest products Apple offers, but they are still high-end.


I wonder what will happen when AMD launches 5nm processors.


It's always a slippery slope comparing future products to present day products. Apple has additional CPUs coming out over the next 2 years as well. It's going to be an interesting couple of years.


Apple will be releasing 3nm M3 processors that will probably still be faster.


So, it's like AMD, but different?

I am whelmed.


Not exactly; the performance per watt is what's impressive here.


The last sentence on the first page has a typo, I think. I am guessing it would be `tough competition`, not `though competition`.


I never understand downvotes in this situation. Like, are you mad someone is calling out typos? I mean, I'm not surprised; I got downvotes for literally the same thing [0] a few days ago when AnandTech published another article full of typos... I like their reporting, but the typos and the multi-page nonsense are a huge turnoff.

[0] https://news.ycombinator.com/item?id=25052892


I would assume the downvotes are because this doesn't really add to the discussion at all. If you spot typos, why not notify the original website instead?


That's what I hoped to do with the original comment. I didn't want to send them an email for something this trivial, and I was just leaving the comment in case anyone from AnandTech stumbled upon it. The typo is not really an issue, and my comment was not a criticism. I was just trying to be helpful, but I can see how it could come across otherwise when my comment is only about the typo and not the content of the post.


Honestly, I don't mind. I wasn't commenting on the quality of the post, nor was the typo that much of an issue. I have seen people from AnandTech active on Hacker News, and I just think this is a good way of reaching them and letting them know about something trivial in their posts.


Not very impressive considering the 5nm process, but a good start. I expect impressive CPUs in the coming years with more cores and a higher TDP. Hopefully the Mac Pro Mini rumours will be true. That would be a strong candidate for my next computer.


Not very impressive that a CPU drawing 7-8W beats or just trails behind a desktop-class $799 Ryzen 9 5950X drawing 49W+ in single-threaded performance?


The Mac mini is a plugged-in computer. I don't care whether it draws 7W or 700W; electricity is cheap. And the fact that it trails behind AMD while on a better node means that its design is either inferior or not fully uncapped.


Are you the kind that can't hear their computer fan because they wear headphones?

And/or the kind that keeps their laptop plugged in all the time?

Some people do care about cool, quiet and long battery life.


Actually it's very impressive.



