Apple to Apple Comparison: M1 Max vs. Intel (unum.cloud)
243 points by signa11 on Dec 24, 2021 | 214 comments



I "downgraded" from a top spec Macbook Pro 2019 16" i9 9980HK with 64GB RAM to the new 14" M1 Pro with 32GB. Fans are at 0 RPM all the time, I haven't heard them yet. It compiles the same Android project without a sweat and with 0 RPM fan speed. The i9 was hitting 100C and maximum RPM in the same use case and took maybe three times longer. And the new macbook was about 25% cheaper than the 16" I bought in 2020. It's a completely different league - worth all the money I spent.


I went from a 2012 retina MacBook Pro to a maxed-out (except storage) 14" M1 Max. The performance difference keeps blowing my mind on a daily basis. One Android project I sometimes work on took ~45 minutes for a release build on the old one but only 11 on the new one, and as it turns out I had Spotlight indexing stuff in the background the whole time, and the SDK is still x86 so it runs through emulation. The fans do spin up sometimes, but you have to try really hard to make it happen. On the old MacBook the fans were spinning literally all the time and the CPU temperature never fell below around 60°C at idle. The part of the case above the keyboard was always hot to the touch; on the M1 it's barely warm.


I'd be curious to hear how your old laptop could improve after getting cleaned of dust and with fresh thermal paste.


When I replaced the thermal paste on my Dell 7750 with Conductonaut Extreme my builds were no longer thermally throttling the CPU.


Do we have any thermal paste or other solution that works for 5+ years without degradation? (I think I looked into this before but I forgot the answer.)


My guess is thermal epoxy will work. But you'll never be able to separate the heat sink from what you attach it to.


I did clean it kinda regularly and did replace the thermal paste once. It stopped throttling when I put it on a stand to provide some airflow from below.


Part of that is that Intel can't get their mobile chips to a low enough wattage to sustain anything more than a quad core for very long. AMD is beating them, but ARM is in a league of its own after a decade-plus of mobile-focused development.

I'm excited to see how this changes x86 systems, mobile and desktop.


I assume 2021 and not 2012, right?


No, I do mean 2012. The first ever macbook with a retina display. I managed to skip all those troublesome models.


I'd expect a 10 year old laptop to be quite a bit slower than a new one, M1 or not.


I ran this build once again now making sure nothing heavy was running in the background. It took 11 minutes and 5 seconds. The fans did audibly spin up and all CPU cores were 100% loaded (asitop output: https://i.imgur.com/OY36RVB.png). Here's the breakdown of how long each build step took: https://i.imgur.com/TcdfA4L.png

Do keep in mind though that the NDK (which takes like 90% of the build time) is still built for x86. So the 4x difference is mighty impressive when you consider that you're comparing an old x86 CPU running its native code to an ARM chip running x86 code, even if Rosetta does so via translation rather than pure emulation.
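
If you want to check what's translated and what's native on your own setup, here's a rough Python sketch (the NDK path is hypothetical, so point it at your own install; `lipo` ships with Xcode's tools, and the `sysctl.proc_translated` key reports whether the calling process runs under Rosetta 2):

    # Rough sketch: which architectures a toolchain binary ships with,
    # and whether this process itself is running under Rosetta 2.
    import subprocess

    # Hypothetical path; point it at your own NDK install.
    ndk_clang = "/path/to/ndk/toolchains/llvm/prebuilt/darwin-x86_64/bin/clang"

    # `lipo -archs` lists the architectures in a Mach-O binary,
    # e.g. just "x86_64" for an Intel-only NDK build.
    archs = subprocess.run(["lipo", "-archs", ndk_clang],
                           capture_output=True, text=True).stdout.strip()
    print(f"NDK clang architectures: {archs}")

    # 1 = translated by Rosetta 2, 0 = native; the key is absent on Intel Macs.
    translated = subprocess.run(["sysctl", "-in", "sysctl.proc_translated"],
                                capture_output=True, text=True).stdout.strip()
    print(f"this process translated: {translated or 'n/a'}")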


It will get even better once everything is ARM native. Even the Java SDK that comes with the current version of Android Studio runs through Rosetta.


Hm, no, Android Studio itself runs natively, and the JDK I'm using is native too, from adoptium.net. Most Android build tools (d8, r8, etc.) are written in Java so they run natively. Building the Java part of any project is really fast. Like, freakishly fast. 3 seconds to compile a thousand classes, WAT?!


Ahh, that's awesome then. I thought the JDK that shipped with Android Studio was x86. I'll try the one from Adoptium.


So a decade old MacBook is only 4x slower than the cutting edge one?


For single core workloads, probably. Didn’t we hit the clock speed cap around 2010?


Building a project isn't a single-core workload usually, though.


Isn't C++ compilation mostly single-core-bound still? Of course it does compile as many files in parallel as there are cores, but it's not possible to compile a single file in multiple threads.


Yeah but most projects that have compile times long enough to care about have many source files.
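
Which is roughly what `make -jN` and Ninja exploit. A toy Python sketch of that file-level parallelism, just to make the distinction concrete (the compiler and file names are placeholders):

    # Toy sketch: compile many .cpp files at once, one per core.
    # Each individual file is still compiled by a single thread.
    import os
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def compile_one(src: str) -> str:
        obj = src.replace(".cpp", ".o")
        subprocess.run(["c++", "-c", src, "-o", obj], check=True)
        return obj

    if __name__ == "__main__":
        sources = ["parser.cpp", "lexer.cpp", "codegen.cpp"]  # placeholders
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            print(list(pool.map(compile_one, sources)))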


You haven't seen most Android projects, which have a massive legacy module that Gradle spends 5 minutes compiling while doing nothing else. Seeing those 7 lines say IDLE is a daily experience.


There is a lot more to single-core performance than just clock speed; in fact, IPC is even more important than clock speed when it comes to single-core computing.


Not surprising, as Moore's law stopped applying to CPUs a while ago. Assuming the CPU gets a 20% improvement each year, from 2012 to 2021 the latest CPU would be about 5x more powerful than the old one. Factoring in a ~30% performance loss for translating x86 instructions to ARM, 4x is a reasonable number.
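
The back-of-the-envelope arithmetic, for anyone who wants to check it:

    # 20%/year compounding over 2012-2021, then a ~30% translation penalty.
    speedup = 1.20 ** (2021 - 2012)   # 9 years
    print(round(speedup, 2))          # 5.16 -- raw generational gain
    print(round(speedup * 0.70, 2))   # 3.61 -- roughly the observed 4x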


I did the same, but to an Air 16GB. Much cheaper, mostly faster, and it doesn't have the quirks I had with the 16". I had that one replaced 3x and it was still shit; it overheated, and 3x it just completely died... It was the most expensive computer I'd bought since the 90s (Sun or SGI were more expensive) and it was a total horror show. Apple did give me new ones, no problem, but they never worked well for me; fans always on and burning so hot that it was uncomfortable to work on. This new Air is really fabulous; always snappy, and it does not heat up at all. It's actually a bit too cold.


The m1 air is the star of the show for me, because it is a machine that overdelivers to such a degree. I got one to replace my laptop, and to my surprise discovered that it outperforms my i5 9600K desktop, by a noticeable margin, without even getting warm. Also, it’s really nice to finally have actual all day battery life. For the brief window that our offices reopened I would leave my charger at home and still have 20% to 30% left by the end of the day.


Absolutely! I have the 16gb Air m1 and a fully loaded 16” M1 max - the max is collecting dust most of the time. For most of my workloads the Air is nearly as fast, completely silent and super duper lightweight.


The 9600k is an entry level CPU from 3 years ago. Personally I don't think it is surprising that it gets matched by a premium modern CPU. I do think there's something wrong with your desktop however, because in raw benchmarks the difference between an M1 and 9600k is minor - and it should not be a noticeable speed improvement unless something else in your configuration is causing problems.

Similarly, Intel's 1185G7 laptop CPU also goes toe to toe with the 9600K, with comparable performance to the M1 (albeit with worse efficiency). But the efficiency isn't a bother personally, as the 1185G7 laptop still lasts all day for my usage. I do have an M1 MBP and have used them side by side, and I'm not really sold yet. Admittedly part of the problem is the locked-down hardware and lack of support for Linux, but in the meantime I'm always reaching for the Intel laptop and do not miss the MBP.


> The 9600k is an entry level CPU from 3 years ago.

It's not entry-level. It's the fastest 6-core part from a product line that included 2, 4, 6 and 8-core desktop parts. Every model in that product line that was better than the 9600k could only be described as high-end.


Very much doubt the desktop has a configuration problem as it is set up the same as the laptop.

I think the difference is down to the single core performance edge that the m1 has. In theory multi core benchmarks should dictate the performance you get, but in practice it seems a lot of things, even development tools, depend a lot on single core performance.


I don't think entry-level means what you think it means.


A colleague is working on the same project with an Air 16GB and doesn't complain. It's crazy that it doesn't even have active cooling and can be used for serious CPU intensive work. The 16" model was a big Apple failure. I had to use it in clamshell mode with an external monitor because having the macbook open with an external monitor would make the macbook run a lot hotter and the fans would spin much louder all the time. That was a pretty well known problem with the GPU and they never fixed it (probably some HW issue).


Apple messed up the PCB/config such that the GPU will automatically run at 18W (full tilt) when an external monitor + the internal monitor are used. The fix in the later OSes was just to lower the power limit to 15W. That 16" was a really nice laptop in form factor and physical experience (monitor, keyboard, trackpad, and especially the speakers), but that GPU issue plus the Intel heat and energy profile makes it an annoying computer to use.

I've transitioned to a 16GB M1 MBA too and it's a dream. CPU temps < 60C on dev loads on a hot day (I work outside). 16GB RAM means low swap usage (I used 8GB M1 MBA before this and it was still snappy except on the very limits). Still can drive a 4k monitor or mirrored Sidecar (with BetterDummy) without a peep. Building my work project takes the same time as the i7 16" MBP without the fan noise.


Exact same issues. I just changed my 16" Intel Core i9 MB Pro for a 16" M1 Max; it's the biggest technological jump I have ever seen, and the M1 Max is even cheaper.


I have a 16" M1 Max, fully loaded. The thermal profile on this thing is insane, here is a screenshot of the internals after being on for a couple of hours with a bunch of chrome tabs open https://imgur.com/a/yjXxdvJ its barely warmer than body temperature.


Also, a note for anyone on the fence about it being 1 inch larger:

the 16-inch model is the same physical size as the 15-inch models; they removed a lot of bezel. The screen has the same camera/sensor notch as an iPhone, which allows them to extend the screen farther.

Full-screen videos are letterboxed 99% of the time, so the notch blends in. I don't play full-screen games so I can't tell if it's an issue there. And the rest of the time it sits in the OS menu bar.


The 16 inch is slightly larger. I had them both for a week when switching and stacked them on top of each other to check.


Oh yeah, you might be right. I had skipped the donglebook models and compared to the last pre-M1 model that had MagSafe and an SD card port.

I had googled the size comparison before I purchased the M1 Max model and it helped move the needle for me, but I don't remember the source and we were likely comparing different models.


A bunch of idling tabs should consume no CPU. Why would you expect it to be warm?


Doing something like code compilation would be a better test of thermals than opening a bunch of chrome tabs


I don't know - there are websites that will make my 2017 MBP generate more heat than the sun even if they are in the background.


Wow, thanks for the share! I had no idea there were that many temperature sensors throughout the board.


I tend to be around 55-60 degrees... Still great though; my previous MBP from 2019 was around 90.


Exactly. Same for me (except 16GB). I am programming all day (multiple IDEA projects open) and have probably 200+ tabs open in Chrome and Safari. Not a sweat.

I keep saying, the best Apple advertisement is all the other computers (and the Windows/Android operating systems). It's like they're not even trying!


These are good real world examples, and the other comments here in confirmation of the performance and energy benefits are pushing me towards Asahi Linux when it is good enough/close enough to be roadworthy. At that moment, Apple can have my money for an M1 Pro.


I'm guessing that even once Asahi is usable enough for a daily driver, it'll still take some time until the efficiency and battery life are comparable.


Comparable to macOS is one thing, but I'm looking at having the best-performing Linux-based laptop. So if the efficiency and battery life of even early versions of Asahi Linux on M1 turn out to be clearly superior to contemporary Intel-based laptops, that'll be a strong lure.


Yeah, I would love to be able to buy the best-performing Linux laptop as well.

I have been really happy with my ThinkPad X1 Carbon, but I am getting a little envious now.


Same experience with an M1 16GB. I have never felt such a jump in experience from one laptop to another. The M1 stays cold and silent and the battery lasts much longer; it's simply incredible, even when I tried to throw games at it. It exceeded my expectations.


I skipped the fans entirely and went from a MacBook Pro Intel i9 to a MacBook Air M1. Plenty of performance, much better battery life, and the size is more comfortable than the 14"/16".


The battery life is wild. Watched a full length movie on a 14” M1 the other day and checked the battery… 95%


It's such a nice experience to not hear fans or burn your lap while programming on the couch.


I went from a maxed-out late 2019 to a maxed-out M1 Max (except SSD). The M1 is so much better it's almost crazy, and I can finally use my two LG 5K monitors again without the machine slowing to a crawl. In a lot of ways the 2019 was worse than my previous one, which I believe was the 2017 model. Glad Apple finally listened to what people wanted and made a laptop that works, added some ports back in, and got rid of the terrible Touch Bar.


I met with a coworker recently who has the intel macbook and I was shocked to hear the amount of noise coming out of the thing while just running the basic programs we need to run. It was even discharging while plugged in until they closed enough stuff and turned docker off. Meanwhile I haven't had any issues with the M1.


That sounds like a laptop full of lint though


It's unfortunately not. The CPUs of the 2019 16" Intel MBPs are right at the thermal wall, and after removing so much mass, the Macs have no way to remove heat other than cycling fans extremely aggressively.

We hear almost nothing but complaints about them from long-time Mac users who are used to historically quieter devices. Under any load, like a few minutes of Zoom or Teams while connected to an external display, the fans are persistently and disconcertingly audible. It's become almost a meme in video conferences over the past two years: when someone unmutes to speak and the fan noise overwhelms the noise suppression, you know they have a Mac.

Display artifacts and connection issues with USB-C displays aren't uncommon on the 2017-2019s either, and it still sounds like a torture test when running two 4K TB3 displays or a single 5K on one. The M1s are the first Macs I've recommended to friends and family in almost a decade.


A brand new Intel 16-inch MBP will blast its fans doing even the simplest tasks: a single Twitch live stream open in the background, running IntelliJ, a docked 4K monitor. Especially in clamshell mode, the fans will ramp up and stay ramped up, and the top row of the keyboard will get so hot that the machine is unusable without an external keyboard.

(I use one for work, it is terrible. And I used to have one for personal use, which I traded for a M1 Air)


Honestly, the thermals in intel macbooks were completely laughable.


I’m using TurboBoost Switcher Pro to battle this issue. It keeps the machine cool(er) and thus quiet(er). Combined with the VRM cooling mod, it is doable with heavy workloads. Battery life is still bad though and now the bottom gets very hot.


Yeah, we know. But consider this: your baseline is a shit laptop. Apple knew they were releasing a turd. Maybe they even did so semi-intentionally because that would make the new chips seem even better.

My personal Ryzen 5800H-powered Lenovo doesn't overheat, and I haven't heard the fan in over a month despite doing all kinds of lightweight development and browsing on it.

Meanwhile, my fairly expensive work-issued 6-core Intel Xeon Lenovo from 2019 sounds like it's about to take off throughout most of the day.


YMMV though, I briefly owned a 5900HS/3080 Zephyrus G15 and though it was indeed a powerhouse, it absolutely got more hot and way WAY more noisy than my 16” M1 Pro does now doing the same things, while overall not being as nice of a laptop (far worse screen, somewhat chintzy build, meh speakers, barrel power connector, huge power brick, bad battery life, etc).

That G15 was better at some things than my M1 Pro is now but not by enough that I’d want it over the M1 Pro, even considering the price difference.


what kinds of workloads are you running? I bought the exact same asus laptop to be a portable gaming machine, but I've been surprised at how nice it is for normal laptop stuff. unless I'm playing a game or compiling a large project, I just leave it in "silent" mode and never hear fans.

everything you say about the build quality is definitely true, but I couldn't find anything better that a) had a 3080/3070 and b) was actually in stock.


It was mainly gaming that would get it roaring, but compiling code among other things would spin the fans up a bit too. I didn’t put it in “silent” mode except when doing light things like web browsing because it felt like a waste of the hardware otherwise with how silent mode downclocks and limits boost speeds.


I guess I rationalize that all the money went towards higher-binned parts that at least clock a few hundred MHz higher than ultrabook parts in silent mode. But yeah, I get that. Glad you were able to find a machine you're happy with :)


This thread is really interesting to read because if the same thread had been made a couple of years ago, before Apple released its M1 laptops, it would have been filled with Apple fans who would be saying that the Intel MBPs were the best thing since sliced bread and that the users were doing something wrong.

I think the only time Apple fans actually accepted that an Apple product was actually bad was the trashcan Mac Pro, and that was only because they went half a decade without upgrading it and they replaced a beloved design with one that was branded as a trash can.


> This thread is really interesting to read because if the same thread had been made a couple of years ago, before Apple released its M1 laptops, it would have been filled with Apple fans who would be saying that the Intel MBPs were the best thing since sliced bread and that the users were doing something wrong.

I work on an Apple laptop; all of my colleagues work on Apple laptops.

Maybe what you say is true in the spaces you find yourself in, but in all the spaces I chat about these things (Android developer community) none of your claims are true. The general feeling about the last generation of Intel MBPs was initially one of relief that they didn't f** up the keyboard, and then slow dawning disappointment as we realized that the thermal design of the machine was abysmal.

Up until this most recent M1Pro/Max iteration, things were getting so bad that many professionals I know were at least considering a move off of the Mac platform that most of us have been on for a decade or more. Many of us need to compile iOS code occasionally, of course, which means we're kind of stuck.

The M1Max has completely changed all that. One of the senior engineers I know actually put together some benchmarking numbers of compile times, assembled a spreadsheet of every engineer with a pending hardware upgrade, and added up the numbers for a budget ask to buy every single engineer at their company a top of the line 64GB M1Max. The final bill added up to hundreds of thousands of dollars, and just putting it together took a lot of time for this person.

It was approved!

So yeah, there's some real turkey on the bone here. It's not all fanboy hype.


There is only a small percentage of users of products who can be described as ‘fans’ in the sense that they will defend the products even in the face of overwhelming evidence that they have problems.

There is a much larger percentage of people for whom the world is not black and white, there is no picking of a ‘team’, and they will use whatever works for them or make do with what they have, moving between brands and models over time.

Please don't push the 'Apple fans' narrative here. There are lots of us who are just trying to get stuff done and have a civilised discussion, not engage in polarised mud slinging.


> Apple fans who would be saying that the Intel MBPs were the best thing since sliced bread and that the users were doing something wrong

Maybe if you only mean the 2015 models, sure. But the models between 2015 and the M1s were trashed by everyone, even Apple fans, for their keyboard issues and for the throttling i9.

I had a 2017 i7, and it was the worst Mac I ever owned.


Yeah, my Apple machine until June this year was a 2015 MBP and people were willing to pay almost full price for it until the M1s came out.


Hmm. I find the interesting part of this thread is all the deny-at-any-cost comments. Apple has created a power/performance tradeoff that is way off the charts compared to the performance path Intel was on. The deniers keep saying there must be lint in the Intel machine, that Apple's thermal design wasn't good, etc. There is plenty of hard data out there for numerous Intel-based machines. Intel cannot come close to the power/performance point that Apple is now occupying.

Here is a different sort of data point. I have two Windows programs I need to run on my M1 Pro MB 16. One is an Intel binary and one has been recompiled to ARM. I am running them both under Win11 ARM on a Parallels virtual machine. I am getting better and more responsive performance than I ever did on dedicated Intel hardware. No noticeable fan noise or heat. It is really just a different beast than I have ever used before. Of course, native Mac apps are also much quicker.

Here is another data point from my own machines (Geekbench single-core scores):

- 2019 i9 MB Pro: 1024

- x86 Geekbench on M1 MB Pro 16" (x86 emulation): 1347

- Native ARM Geekbench in a Win 10 virtual machine on the M1 Pro: 1560

- Native Geekbench on the M1 Pro: 1734 (multi-core: 12400)

Look, Intel performance gains over the past decade have been incremental at best. It makes sense to be skeptical that Apple could change the rules like this. I suggest folks just get one of these machines and run their own benchmarks. Apple will give you a full refund, no questions asked for 14 days from the date you receive it. Don't just throw stones, try it yourself.


The last few years have been Apple fans pissed off about the terrible keyboard they threw in, the touchbar, and the notebooks getting too slim at the expense of performance and cost. I don't remember anyone hyping up the last-gen laptops.


I have a 2018 MBP with i9. The fans drive me crazy. I remember once I was in a meeting with my PhD student who was sitting across from me. The MBP was on my desk in clamshell mode placed in between us. Suddenly its fans started and my student jumped from the chair because the fan noise was so intense. It's been more than 3 years and the fans are still noisy. Just to be sure, there is no lint inside it. I cleaned it just the last week.


There is no doubt the M1 runs very much cooler, but they also made the laptop much thicker to allow sufficient airflow without hitting the fans. I am pretty confident that this is the main reason it is so much thicker than the Intel one.


I also downgraded from the top 16" Intel Mac, but to the 13-inch Pro. I'm thinking I might keep it, as it's almost as powerful as the new M1 with much, much longer battery life (the new 14-inch's battery is comparable with the older Intel MacBooks).


I'm on an M1 (8GB RAM) and it completes my Q/data science stuff faster than my desktop (through Rosetta). I use it for fun mainly, but wow!!


Though, to be fair, the i9 Macs were notoriously bad due to heat management, slower than the i7 Macs in many applications.


While the M1 is definitely impressive, Apple has been known for their lacking cooling designs and weird performance curves back in the Intel days.

This comparison may be Apple to Apple, but it's not really apples to apples. Modern Intel, and even older Intel, can bring a lot more to the table than the MacBook Pro from a few years back could deliver, because of Apple's design decisions. You could make the argument that you're just benchmarking Apple MacBooks to choose a platform to run macOS on, but then the datacenter CPU doesn't really add anything.

If you're going to throw a datacenter GPU into the comparison, at least grab a modern Intel chip for your benchmarks. The 9980HK is from two years ago, back when Intel was already losing ground to AMD. The 10th, 11th and 12th gen processors made significant performance improvements since then (12900HK performing twice as well in some benchmarks compared to the 9980HK), at the cost of thermals and power usage.

The M1 will probably still beat a modern Intel chip, but by a significantly smaller margin in terms of performance (battery usage, though, is a whole different story).


> At the cost of thermals and power usage

Isn't this the whole point? Sure, if you give Intel a bigger thermal envelope it will do better than the M1, but that really is apples to oranges.


It's down to brute-force raw power versus performance per watt nowadays. Apple mostly wins the latter, and the former is inevitably going to kill demand for those climate-unfriendly servers due to cooling and power problems. We await the high-end Mac systems with much interest.


Since those high-end Mac systems won't be able to run in datacenters, what's the point there? You're pretty much at the mercy of whatever Intel, AMD and other ARM manufacturers release.


Well, I'm not sure about your data center comment. But it has gotten those making purchasing decisions asking about ARM-based alternatives. Apple doing ARM lends legitimacy to other ARM vendors, where previously the topic could lead to ridicule.


Apple should license the M1 core architecture to someone to make server chips with it. They'd make some extra money and lose nothing... in fact they might gain since it would build market share around the architecture.


If the massive-bandwidth shared CPU/GPU memory approach is optimal on the desktop for many workloads, then one would think there would be a sizeable market for it in the cloud; after all, many desktop workloads (video editing, for example) are moving to the cloud.

Can’t see it being Apple but feels like an opportunity for someone.


Because Apple wasn't interested in doing that, some engineers left to found their own company doing Apple-style cores for servers. It was called Nuvia. It was eventually bought by Qualcomm, who decided to try to use those cores in mobile despite the designs not being suited for that. I feel like I shouldn't just spill Charlie's scoop on exactly why[1], but it doesn't look good.

[1]https://semiaccurate.com/2021/12/01/how-is-the-qualcomm-nuvi...


My suggestion is to watch the investor day meeting along with AnandTech's reporting on the issue. As usual, not sure what Charlie is on about.

>who decided to try to use those cores in mobile despite the designs not being suited for that

Not sure where that idea came from. It was designed with a similar wattage in mind as the current A15.


So obvious, and yet Apple is worth $3T...


> Mac systems won’t be able to run in datacenters

https://aws.amazon.com/pm/ec2-mac/


From that link, the use cases are:

> Developing, building, testing, and signing iOS, iPadOS, macOS, WatchOS, and tvOS applications on the Xcode IDE.

I thought it was more for when you don't have a choice (Apple forces you to) than general usage. Are things going to change?


IIRC the macOS EULA forbids anything else.

but ..

When Linux boots natively on these systems that restriction will fall away.


These are seemingly macOS-only, which doesn't really qualify for the vast majority of data center uses. They're there to build software for Apple devices, and that's pretty much it.


If your machine perpetually sits on your desk, then that doesn't really matter. For personal use, power and thermals are only a serious concern away from your desk.

My 3950X runs circles around the M1 that work has me use. I use both for the same tasks. Apples to apples.


But it’s one of those things where you might change your behaviour or priorities if something becomes possible. In this case, having all day battery life on a laptop while also having maximum performance.

Same way we started using laptops for all our work. Desktops are always more powerful in absolute terms and cheaper to boot. But most people don’t even buy desktops for home use.


> priorities if something becomes possible.

A CPU isn't the only reason to use a desktop/fixed setup. While Apple has truly fantastic monitors, there's also the keyboard to consider.


Horse is a lot faster than a motorcycle if all you've got for fuel is oats, hay, and water.

Adding in some gasoline would be apples-to-oranges.


More like a Tesla vs a gas guzzling rocket car. Rocket car may be technically faster under optimal conditions - but everything else about it sucks.


It's not clear to this reader which of Intel or Apple you mean for each of your examples.

"Everything else about it sucks" certainly seems like it doesn't apply to either.


I'd guess it's obvious to others (and probably you), given that one of the two in the example is a lot more efficient, but for clarity: the M1 Max was the Tesla.


Your analogy made perfect sense to me - I’m still trying to understand the whole horses and motorcycles thing!


Something that does not perform well when operated in an environment it is ill-suited to: trying to run a motorcycle on oats. Or an Intel processor on a laptop that sacrificed adequate cooling for aesthetics. Or a Tesla in a place without many electric charging stations.

None of these condemn the device, but do indicate the specific use case wasn't well thought out.


Depends if you feel the thermal constraints are inherent to the form factor, or simply sloppy engineering.


>Apple has been known for their lacking cooling designs and weird performance curves back in the Intel days.

Isn't that at least partly because Intel over-promised and under-delivered?


My desktop rig went from a recent overclocked i5 hackintosh with 32GB RAM to an 8GB RAM M1 Mini when they first came out, and the performance was mind-blowing. I also grabbed an M1 MBP at the same time, which demolished my previous Intel MBP, but the hackintosh comparison is the one that sealed the debate for me. I had a massive cooler installed, and yet the M1 Mini was easily twice as fast.

I couldn't get my hands on the 16GB versions at launch, and for the first year they were still so much faster even when swapping 6-10GB, but I am feeling the RAM crunch a bit these days. I recently upgraded the rest of my team to the new 14-inch Pros with 32GB RAM, and I'll be grabbing one for myself when I have the time next year to do a Time Machine transfer.


I found the i9 in the MacBook Pro so severely throttled, most likely because of heat, that it underperforms a 10-year-old high-end i7. The i9 in that form factor is pointless IMO.


I have an Intel i9 workstation machine that I use mainly for development. Let me tell you something: it's the most horrible machine I've ever had in terms of performance (based on compiling a very large C codebase).


> If you're going to throw a datacenter GPU

Do you mean CPU?


The funny thing is that the Threadripper isn't even a DC/Server CPU, it's a workstation CPU. The whole comparison is undermined by the design choices.


It’s undermined by comparing a laptop to a $50,000 workstation? In that it is unfair to the workstation?


Sure, you can configure a cheaper server with just the hardware needed for this benchmark; this is simply the fastest machine we have in our cluster, RAM-wise, that has one NUMA node. That's why we added it as an extra point of reference. The core comparison is of course between two generations of MacBooks, as the title suggests.

Plus, $3000 on a MacBook isn't going exclusively to CPU and RAM either. You buy the whole system. I am not even sure we can find out the price of the M1 alone. I guess it would be around $500 for the M1 SoC vs $6000 for the Threadripper, still a 12x difference, even if we keep their power consumption out of the equation.


In a way, yes. They use a 3995WX with 128 threads, but the benchmark is single-threaded?

Also, the 3995WX is definitely NUMA, so their benchmark is skewing the results.


This benchmark isn't single-threaded; it utilizes all the physical cores found in the machine, 64 on the Threadripper and 10 on the Mac.

See the source for details: https://github.com/unum-cloud/HashTableBenchmark/blob/00f94c...


The 3995WX is a 6-grand chip. Not cheap, but you don't need to pay $50k for a workstation if you're going to compare it to a MacBook. That 1TB of RAM will be a much more significant part of the cost of the machine, but you could never get that in an M1 Mac, so the cost is a useless metric.

I can't tell if these tests make use of multiple threads or not. I don't think they do, because the 128 threads should absolutely crush the 16 threads of the 9980HK.

If they don't, Apple's flagship chips are sorely losing to a chip that's been optimised for multicore performance by sacrificing single-core performance. It'd be like bringing an F1 car to a dirt track and measuring performance against a rally car, and still losing to the F1. That wouldn't bode well for an M1 Mac Pro at all, unless there's some serious GPU power hidden in there that an AMD GPU wouldn't be able to provide at a similar price.


> It'd be like bringing an F1 car to a dirt track and measuring performance against a rally car, and still losing to the F1.

Huh?


Whether it is fair (or not) is a value judgement. All I said was that the comparison was done poorly.

Those choices, together with single-threaded memory-intensive benchmarks (esp. those sensitive to memory latency), are problematic because the whole point of using a workstation CPU like the Threadripper Pro is multi-threaded performance with a wide memory bus (sometimes NUMA). And they make a big deal out of it, emphasising the $50k price tag to paint the M1 Pro/Max as some sort of "dragon slayer". If they did a more rounded comparison including stuff like kernel compile times, etc., then maybe it would be justified, but as it stands it's just unprofessional.


It would be utterly astonishing if the MacBook outperformed the workstation on all benchmarks. The post doesn't claim that it does.

The fact that the MacBook outperforms the workstation on some benchmarks is an interesting and, for some users, relevant datapoint. To point this out with full and detailed disclosure of the tests performed isn’t unprofessional at all.


Thanks a lot for support! It feels like some people forgot to read not just the title of the article, but also the subtitle…


I genuinely thought it was one of the most interesting articles on the M1 we’ve seen. Thanks for your work in putting it together - any follow ups would be much appreciated.


According to rumours, the upcoming Mac Pro chips will have 2x to 4x the core count of the M1 Pro/Max.


On some Intel CPUs, the charger included with the MacBook does not satisfy the CPU's power consumption. So if you run the CPU at 100%, it will use the battery to get extra power, and the MacBook will eventually die after the battery reaches 0%.

This is not how you design a "workstation". MacBooks are just toys.


A non-zero percentage of the world's GDP originates from work done on MacBooks, which differentiates them from most "toys". I don't have one, so I'm not talking my (mac)book here.


[citation needed]

I've only ever seen this happen if you use the lower-wattage charger from a 12" MacBook Air with a higher-specced MacBook Pro.


This happened with my 2019 16" MBP (8-core i9, 2.3GHz). No charger, including Apple's own 96W one, was enough when doing something heavy like CPU video encoding.

Last year I had to convert some Blu-ray movies so I could stream them to my TV, and it was common to go to bed at 100% and wake up at 20% or so. Throttling kicks in when the battery is too low, so I don't think it would shut down, but yeah, this was one of its flaws.

It also overheats and throttles very quickly (~1 minute), even with a more aggressive fan curve or with the fans manually set to run at 100%. This can be managed by disabling Intel's Turbo Boost or by using Monterey's power saving mode, but that's not something one expects to have to do on a "Pro" machine.
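
If you want to watch this happen, a rough sketch using macOS's built-in powermetrics tool (needs sudo, and the exact output fields vary by macOS version, so treat this as a starting point rather than gospel):

    # Sample CPU package power once and print the power-related lines.
    # Under full load, the package alone can approach the 96W the
    # bundled charger supplies, before the dGPU is even counted.
    import subprocess

    out = subprocess.run(
        ["sudo", "powermetrics", "--samplers", "cpu_power",
         "-i", "1000", "-n", "1"],
        capture_output=True, text=True,
    )
    for line in out.stdout.splitlines():
        if "Power" in line:
            print(line.strip())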


While I don't agree with the parent's declaration that all MacBooks are toys, that battery drain issue while plugged in definitely did happen. I couldn't find a source for it just now, though. It used to happen on my 15" 2013 MacBook Pro when it was plugged in and I was playing games (BioShock Infinite was one such game).


It is easy to make my 2019 16" MBP consume more than the 96W that the included charger provides. Loading the CPU alone gets you close to that; just add some discrete GPU work and there you go.

In fact, since connecting an external display enables the dGPU, the consumption can increase enough to get you over 96W with the CPU alone.


Nope. The charger provided with the 16" MBP (2019) can't deliver the maximum wattage the MBP requires if everything (CPU, GPU, screen brightness) is at max usage. It's no issue for the "everyday use" of most users, for sure, but for a pro machine...

Stupid, but true.


I almost had this happen on my M1 MacBook Air a few days ago, using the included charger. I was watching The Witcher Season 2 in HEVC 4K HDR (so the highest quality) and had Cura (3D printing software) open in the background. I got the notification "MacBook is not charging", since it was using power too fast, apparently.

Not sure if it was draining, but the battery wasn't charging either. Closing Cura resolved the issue, though I went from 90% battery to 10% in about 2 hours before plugging the laptop in.


This would happen on my late 2013 MBP using its original charger. Playing a CPU/GPU intensive game would result in a slow net draw from the battery.


Might also be worth pointing out that there is no mention of "battery" in that blog post.

The M1 Max is already a beast of a chip, and lots has been written on HN and elsewhere about how it knocks the socks off the competition. I'm not going to repeat all of the stuff that has already been written all over the internet.

But the most impressive party piece of the new M1s is surely that they can knock the socks off the competition whilst running on battery and can do so for a decently sustained period of time.

All of the remotely viable competition I am aware of needs to be plugged into the mains at all times otherwise you get a very throttled experience.


Apparently the Asus Zephyrus beats it on a number of benchmarks by a large margin, but I'm not sure if those were just GPU benchmarks, or whether they were run on battery. Linus Tech Tips did a recent comparison.


The Asus probably also runs hotter and requires more use of fans. And I doubt the battery will last as long on intensive tasks.

A big part of the M1 performance per watt efficiency is the SoC integrated design. I'm no motherboard designer but I think it would be nigh on impossible to replicate it using discrete component systems (which I assume is what the Asus is).

Will caveat my comment here with a note that I haven't looked at comparisons for a while, only around the time the M1 was launched.


It's always telling that these benchmarks are done vs. the old Intel chips inside the Macs and not current competitors like the latest AMD Ryzen series.

Apple chips still win, but nowhere near with the same margin as those breathless marketing posts are trying to advertise.


Only, the majority of people getting M1s are upgrading from Macs with those old Intel chips, so those chips are also the most relevant comparison for the customer.


Nitpicks:

>How a DDR5-powered MacBook.....

LPDDR5, which was correctly labeled later in the article. LPDDR5 is not a variant of DDR5, hence the "DDR5-powered" title is inaccurate.

>Cores have massive L2 blocks for a total of 28 MB of L2.

24MB + 4MB; only 24MB is shared between the HP cores. There is also an additional 48MB of System Level Cache on the Max (24MB on the Pro). So in reality, the cache difference on the Mac system is the reason for most (but not all) of the performance difference.

>Printing was done via the 5N TSMC lithography standard.

N5.

>Memory bus was upgraded to LPDDR5 and claims up to 400 GB/s of bandwidth.

On the Max only. The Pro only has 200GB/s, whereas the previous headline claimed this for both the M1 Pro and Max. Again, this is probably nitpicking.


> On the Max only. The Pro only has 200GB/s, whereas the previous headline claimed this for both the M1 Pro and Max. Again, this is probably nitpicking.

That's already a huge amount for a bloody laptop. Probably more relevant is that the CPUs are not able to come anywhere near saturating the memory bandwidth on the Max: Anandtech tested the memory subsystem[0] and it caps out at around 110GB/s per performance cluster (which you only need 4 cores to reach, with 2 cores already coming close: individual cores reached 102GB/s each), plus around 20GB/s for the efficiency cluster.

[0] https://www.anandtech.com/show/17024/apple-m1-max-performanc...
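
For a crude sanity check of your own machine (nothing like Anandtech's per-cluster methodology), a single-process numpy copy well past cache sizes gives a rough effective-bandwidth number:

    # Crude streaming-bandwidth probe: copy a 256 MB buffer repeatedly
    # and count bytes moved. Real measurements pin threads per cluster.
    import time
    import numpy as np

    a = np.ones(256 * 1024 * 1024 // 8)  # 256 MB of float64, well past cache
    b = np.empty_like(a)

    reps = 10
    start = time.perf_counter()
    for _ in range(reps):
        np.copyto(b, a)                  # reads 256 MB, writes 256 MB
    elapsed = time.perf_counter() - start

    gb = reps * 2 * a.nbytes / 1e9
    print(f"~{gb / elapsed:.1f} GB/s effective copy bandwidth")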


> That's already a huge amount for a bloody laptop

It's about on-par with modern-ish dedicated laptop GPUs. The mid-range GTX 1060 mobile from 2016 does 200GB/s, a RTX 3070 mobile from this year does ~450GB/s.


Sure, for the GPU that's just pretty decent, but the real advantage is the amount of bandwidth available to the CPU.


But it's not available to the CPU. A couple of posts up the chain there's a link showing the CPUs are incapable of coming anywhere close to the full bandwidth of the memory.


Yes, they are only really needed for the GPU. But then I don't want people to think the M1 Pro has the same 400GB/s memory bandwidth as the M1 Max just from reading the article. Again, nitpicking.


Ditto all the comments about swapping from 2019 MBP to 2021 MBP and being silent and powerful again.

The other big game changer is battery life. My top of the line 2019 i9 would chew through battery on Zoom. I could barely get 1.5 hours… which makes the term ‘laptop’ a little silly.

The 2021 can run long enough I don’t pay any attention to the battery, which is the point of a laptop - mobile computing.

General computing is even better: people didn't even bother bringing their power supplies to work anymore with the M1. They could go all day doing development and the occasional meeting.

Power consumption and MagSafe are the hidden gems of the M1 models imho.


I never understood why Zoom destroyed my work MacBook Pro's battery either. It was a max of about 90 min, like you said. Seems ridiculous.


Right? How is video decoding not an optimized chip-level operation on modern systems? It's network traffic and video decoding… it shouldn't require an i9 going full bore.


There are two components to a Zoom call:

- Encoding: your own camera input.

- Decoding: decoding multiple streams of video.

Your CPU needs to do work on each of these, and depending on what codecs are used (and whether your CPU supports them), hardware acceleration may or may not be available.
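
One way to see the difference yourself, assuming ffmpeg is installed (the clip name is a placeholder): decode the same file with and without VideoToolbox and compare wall-clock time. If the codec has no hardware path on your machine, the two numbers converge because ffmpeg falls back to software.

    # Decode a clip with and without macOS VideoToolbox acceleration.
    import subprocess
    import time

    clip = "meeting_recording.mp4"  # placeholder file

    for hwaccel in (["-hwaccel", "videotoolbox"], []):
        start = time.perf_counter()
        subprocess.run(["ffmpeg", "-v", "error", *hwaccel,
                        "-i", clip, "-f", "null", "-"], check=True)
        label = "hardware" if hwaccel else "software"
        print(f"{label} decode: {time.perf_counter() - start:.1f}s")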


For sure, appreciate the breakdown.

I am just shocked that, as big as Zoom is, they don't have an optimized client for MacBooks. I suppose the market is still far smaller than PC-based systems… but come on, Zoom et al. have the resources.


We Linux users get crushed by Zoom, Meet, Teams, etc. as well. They're all based on WebRTC and Electron/Chromium. We don't get hardware decoding or encoding because Google doesn't even want to attempt to support it. There are PRs for Chromium going back 10 years.

I'm surprised WebRTC wouldn't be hardware accelerated on macOS, especially considering the great lengths Apple has gone to to hide the hardware differences of the various machines behind CoreVideo, VideoToolbox, and AVFoundation.

Edit: for macOS, it may come down to the fact that WebRTC, being a Google project, prefers to use the VP9 codec. It doesn't look like Apple enables the use of hardware-accelerated VP9.


Thanks for sharing the specifics of the codecs in play. Definitely odd that something so prevalent isn’t accelerated.


Don't forget about the rootkit :-)


It's hard to imagine how laptops have lasted only half a day for the past 20 years, despite plenty of technology innovations.

It must be the desires of users and the laziness of developers that chew up all the batteries.


What really impressed me in this was not the expected M1 to older x86 performance gap (that poor i9 must have been throttling for the whole benchmark while the fans were preparing the laptop for takeoff) but the fact you can get a much faster laptop with 4x as much memory for about the same price from Apple just a couple years later…


The Intel chipsets were limited for a long time in that they could only support a maximum of 16GB of LPDDR RAM. Yet another reason Apple must have been frustrated with Intel.


Is there a reason Intel put that limitation in place?


Product segmentation probably


Incompetence.


Of the management, maybe, but it's not like the Intel staff, who managed to stay pretty much competitive with the cutting edge despite being stuck on 14nm for years, forgot how to design a memory controller.


Make that market segmentation instead.

Intel intentionally limits consumer grade hardware in order to create distinct market segments - from overclocking to memory support and PCIe lanes.


True, but that kind of mismanagement is what caused Apple's migration to Intel in the first place. While losing Apple made sense for IBM, it made no business sense for Intel, as Apple was a valuable showcase partner.


The price given in the article is not correct though; the 16" M1 Max with 64GB is actually $3,900 (and I presume this is before sales tax; I'm from a country where advertised prices include VAT).


Did I miss it, or did the author fail to mention the most important variable of this comparison, namely the frequency at which the memory of the AMD server CPU was operating?

If you're doing memory benchmarks, why omit essential details like memory frequency, which can have a bigger impact here than the CPU itself?


The entire thing seems incredibly half-assed, e.g. they're testing a 64-core CPU on a single-threaded workload (literally the only mention of threads is that they "disable multithreading"), and they repeatedly harp on the server being "over 50k", but the CPU is 5k, so I can only assume most of the 50k is in the

> 1 TB of eight-channel DDR4 memory

which you apparently need to... bench hashmap lookups?

The Mac's pricing seems pretty bullshit as well, as the lowest price to get 64GB is $3500 (a Max with 24 GPU cores and a measly 512GB SSD), which does not qualify as "around $3000" in my book.


It's configured with the fastest memory possible on the Threadripper. The exact specs and RAM stick models match the ones mentioned in the preceding article, which I have linked below. We didn't include those, as we don't know the exact specs of the RAM modules in MacBooks.

https://unum.cloud/post/2021-11-25-ycsb/#our-toys


Setting aside the fact that the Threadripper is not really a server CPU, it tops out at DDR4-3200 when it comes to memory support. Considering the custom build, that seems the most likely option. (As in, why would it use anything slower for no good reason?)


The Threadripper Pro build has 1TB of memory, which means it's likely registered ECC, which is slower at the same frequency. And it might not even be DDR4-3200 in the first place, because that speed is rather new in ECC and server hardware manufacturers like to use old stuff.


Funny that everyone does benchmarks and goes ooh and aah at the bigger numbers. Or complains about apples to hazelnuts.

How about this: I do NOT want a space heater on or under my desk. I have a separate heating system, thank you. Looks like the MxBlaBla chips can deliver good performance and stay cool.

Now if Apple would hurry with the MxBlaBla desktops... I don't need a new laptop atm.


I have the opposite complaint. I couldn’t care less about heat or battery life. I want the fastest possible machine, period. And too many articles about M1 discuss heat and efficiency which I just don’t care about.


So, just build a desktop or buy a huge laptop.


I did. My work desktop is a Threadripper. A++++ would recommend.

The fact that a laptop is even in the same discussion as desktop chips is crazy. The M1 is a seriously impressive piece of hardware.

End of the day all I care about is how fast my chip runs. Different people care about different things, and that’s fine.

I'm generally annoyed by reviews that don't clearly compare desktop vs laptop parts. This is especially true for GPUs. Just give me all the data and I'll decide if I care about price or energy use. It's remarkably difficult to find a comparison between a laptop GPU and a desktop GPU, which is an important comparison if you're trying to decide between a laptop and a desktop.


I've been wanting to build a Threadripper workstation for a while, or at least a top-tier Ryzen. My experience jumping from my loaded 2018 MBP (Core i7, Radeon, 32GB memory) to an M1 Mini with half the RAM has caused a paradigm shift in the way I think about computing.

I abuse my machines too. I'm a freelance polyglot, so in any given hour of the day I am working on Python, JS, design work, Clojure, Elixir, Ruby, scripting InDesign, etc. One of the systems I operate runs millions of domain analysis operations a day, and I tend to run a scaled-down version of the real system when developing. I do that for all clients, really.

The MacBook Pro must essentially be viewed as a desktop. The fans are always on full blast. The disk access is slower so Docker container file system sync tends to suffer. It’s not a terrible machine in and of itself, but once I “upgraded” the difference was night and day.

My M1 Mini has 16GB of RAM but is otherwise stock. Assuming native ARM execution for a container, Docker is significantly faster at most things. Everything is faster. Running an Intel build of VSCode is the only thing that has ever stressed it, but I've never heard the fan. Disk performance is very impressive for such a commodity-grade machine.

We just deployed one in a client site to perform Adobe InDesign imposition “as a service” for an on-demand print factory. It’s handling the work like a champ!

As much as I’d love a new ARM MacBook Pro, the chassis is too big for me. Sounds petty, but the OG MacBook Air chassis is the ultimate mobile computer for me.

So I think I’ll hold off and wait until Apple unveils their Mac Pro replacement. If we extrapolate things out even just a little - it should be a monster.


In summary, Apple made a better product than a two-year-old Mac: a 64GB DDR5 M1 vs. a 16GB DDR4 overheating i9 Mac from 2019.

This is indeed Apple to Apple, but not apples to apples.


I haven't seen an M1 Max yet, but I've had a ton of 15", 17", and 16" MBP models over the last 10 years at work. All have generally been maxed-out i7 models, up to the latest one, which is an i9 I got about 3-4 months ago right before the ARM ones came out.

I'm totally on board with the narrative that the MacBook Pro has been going downhill for a long time and going to ARM is the ray of hope. At least in the case of my i9 16" they finally fixed the keyboard but it's a ridiculous space heater and it's only about 2x as fast as the 2018 i7 it replaced on a lot of tasks, and that's probably only because I was RAM limited on the 2018 one and am not on the 2021 one. The fan drives me bonkers as it pretty much runs full blast all day while I'm working. Lots of us on our team have seen the battery draining while it's plugged into the wall.

Some of these benchmark games miss the point: the user experience for a "mobile workstation" user has been going downhill for a long time, and these new machines are finally starting to turn that around.

I've had a 13" M1 MBP since launch last year, FWIW, and have never heard the fan come on. Other than the fact that it has a Touch Bar, it's just about perfect for a personal machine. It feels much faster than the i9 for most real-world tasks, and the battery life resembles a Kindle more than a laptop.


I've always wondered how much of the M1's gains come from ALL the components (not just the CPU but, more importantly, also the GPU, controllers, etc.) on its SoC being fabbed on the smallest industry node size, which competitors can't use yet.

It's not uncommon on x86 platforms for GPUs, memory controllers, etc. to run on nodes 1-2 generations old while the CPU uses the latest tech.


Sure, the fab size is part of the story, but we have also seen a general tendency toward efficiency from Arm chips, which is the reason they are used in mobile devices. Apple has also been willing to spend money on larger caches and wider buses in their SoCs to remove bottlenecks. Finally, Apple tunes the processor performance to optimize for processes that macOS uses a lot. That is possible because of their tight vertical integration, and it's something I have not seen in the more heterogeneous Windows/Linux + Intel environment.


This is a key difference. The amount of power required to drive bond-pad loads can be substantial, and there's a major speed hit because you have to drive the larger capacitance of the load, which can only be overcome with more power. Keeping it all on-chip is definitely a major factor, because you have none of this sucking up power and speed.


The new 12900HK from Intel beats the M1 Pro in both single-core and multicore performance:

https://news.in-24.com/technology/414335.html

TDP: 35-45 W

But of course, it's better to compare with a 3-year-old CPU, so the results will be much more impressive.


> The Lenovo notebook with a Core i9-12900HK achieved 1878 points in the single-threading run of Geekbench 5.4.3, and 12,058 points in the multi-threading test.

> A 14-inch MacBook Pro manages around 1750 (single threading) or 12,600 points (multithreading)

Higher in single-threaded only.


Yes, on this laptop only single-core is faster, but on some other laptops the overall performance of the Intel CPU is higher:

https://www.tweaktown.com/news/82364/intel-core-i9-12900hk-m...

On that laptop it achieves >13,000 points in Geekbench multi-core.


that just means that it's getting faster, while still drawing a lot more power.

My older i9-based Mac was additionally having thermal problems when the AMD graphics chip was used, which added another 20+ watts to the power consumption: use of external screens, video conferencing, ...

The new MacBook Pro happens to vastly improve the power consumption of the CPU, the powerful GPU and the Thunderbolt interface.


"$50K" is so vague and not useful for comparison.....


AMD Threadripper Pro 3995WX with 1 TB of eight-channel DDR4 memory


Am I the only one not seeing legends in the bar charts?


Tiny, on top of the bars.


I'd really love to find out how much of the M1's performance is due to the advantage of having a custom architecture designed to make Apple's refcounting mania actually perform well. I.e., somehow run all their ARM code in a way that emulates the M1 instructions on a regular ARM chip with similar clocks, cache, etc... probably very hard to do. Maybe you could patch a binary that came out of Apple's compiler to not use the new instructions? The atomics they use for refcounting definitely have a measurable performance impact on x86.


Nah, most of the performance comes from:

- Massive caches

- Really really good power efficiency due to good manufacturing node

- Massive RAM bandwidth

- Very fast I/O performance of the builtin SSD

It's a big, expensive chip to manufacture and it pays off with overall performance.


There are no "special" M1 refcounting instructions, they're just ARMv8.1 LSE atomics made to go really fast. (You can confirm this pretty easily: objc_retain uses cas.)


Yep, I think it's interesting to note how slow x86 was in general with refcounting even compared to the PowerPC era.

It's something that showed quite a lot early on when you compared what was considered light usage at that point in time (say, open Word and a couple of Safari tabs); a G4 wasn't ridiculous in those scenarios compared to the first Core 2 machines.

While I'm sure the designers paid particular attention to those instructions, it's really x86 being a terrible instruction set, rather than the other way around, that creates that particular refcounting gap, I think.


I think that’s the OP’s point? Apple optimised the snot out of their chip for the code they run.

Vertical integration works, in other words?


> somehow run all their arm code in a way that emulates the M1 instructions on a regular ARM chip

The comment gave me the impression that the author may believe M1 has some special instructions for reference counting; it does not.


Sure, vertical integration is nice, but why is it so hard (maybe even impossible) to achieve similar results without it? Windows is such an entrenched ecosystem, with millions of PCs running it, and the code running on most of them isn't a complete black box. I find it weird that Intel and AMD, despite their cooperation with Microsoft, are not able to achieve similar kinds of results.


Are there any 5nm CPUs available yet for x86?


This isn't a recent problem. Sometimes those companies work together, but they are often at cross purposes because they each have different incentives, whereas at Apple they all report to the same person.


It's a top-down versus decentralized, market-driven approach, with all the upsides and downsides of each.


AMD and Apple aren't playing on a level field currently. We'll see how well the 5nm AMD parts with the 3D-stacked cache perform when they come out next year.


More likely, being able to use a new architecture that isn’t burdened with decades of legacy compatibility works better than an architecture that is burdened with decades of backward compatibility.


ARM isn't new and hasn't been a performance champ for most of the decades it's been around. The other ARM server implementations tended to be slower but better at perf/watt or perf/dollar; Apple changed that with absolute performance wins as well. That suggests it's more the execution by their team than an innate architectural advantage, especially given how many other cleaner architectures were wiped out by x86.


That's not correct. Architecture generally refers to the ISA, and the GP comment referred to special M1 instructions.


Maybe I'm naive, but I really haven't seen a reason to jump off my Core i7 6600 Lenovo laptop or 4770K desktop, neither of which spins its fans up or kills its battery. I can run Xilinx Vivado, KiCad, and GCC just as well as on the newer 18-core i9 I have for work. Having used a MacBook Air with an Intel chip, which is incredibly slow in comparison, I have to wonder if the issue isn't macOS itself rather than the hardware involved.


I am very happy with Apple doing that, but I'm also keeping my fingers crossed for Intel. That company slept for too long, but I still think it is critical to have. And with these kicks in the butt, I hope they are going to show what they have. Of course, it will take some time. But we need competition; TSMC shouldn't have it all.


Anyone care to share how the M1 performs with virtualization tasks? Say, a Windows desktop or a Linux CLI?


Note that while people here are gushing over performance, it's worth mentioning that you need to run ARM64 virtualized OSes (both Windows and Linux) for it to work.

Getting Intel x64/x86 VMs to work is... fun and they don't perform nearly as well as you'd expect from the gushing praise.

So double check if your virtualization use-case actually is fine with running ARM guests.


Docker runs a Linux VM, and that destroys Intel MacBooks. I hardly notice it running on the M1. Flawless experience with minimal power drain.


A coworker of mine is seeing their M1 perform about 2-4x slower than my Intel MacBook Pro wherever Docker is involved. So YMMV.


That sounds like they're running x86 containers. I use Podman and it's very fast with ARM containers but notably slower when emulating x86, and there's at least one case where this hit a JVM race condition in some old vendor code that still uses JDK 8.


I'm able to use arm containers for my work and it doesn't seem any slower than what I'd expect from native performance on my M1.


Ditto: slowness is my cue to rebuild the container for both architectures


I'm using VMware Fusion to run Ubuntu VMs for my daily work. The virtualized 20.04 aarch64 is on par with my x64 desktop (AMD 5900X), surprisingly in both ST and MT loads. The laptop never runs out of juice by the end of the day when I'm on battery, and there's no performance penalty for disconnecting the charger.


Pretty good, but note the M1 chip does not support nested virtualization. So you can run Windows/ARM, but you can't run Docker, VMware, or VirtualBox inside that.


Flawlessly.


As an Intel MacBook user who migrated to the M1 Air and then eventually to the M1 Pro, I have to say the current Apple Silicon Macs are excellent for productivity and casual use.

But anyone interested in gaming either gets a console or needs another gaming rig.


Casual gaming is fine on them, for me at least. Magic Arena and Civ 5 perform well and run totally quiet.

I know this isn't most or even many gamers, but it's the gaming I did on my Intel MacBook, and it's way better on the M1.


This is just the beginning and there is still overlap with new Intel chips (though not at the same power consumption).

I'm very excited for what future products (M2/M3 - Pro / Max) can do in terms of efficiency and performance.


For the "Copying Memory" test - isn't the page size on M1 something non-standard? It may be worth it to also try 8KB and 16kB block sizes.


Is it worth getting the 14” MBP M1 Max or do you absolutely need 16” because of the wattage?

The 16” seems really too big and heavy for me, and I use a 3090 for the GPU stuff.


Watch the latest Linus Tech Tips video about this.


I suggest learning about unified memory architecture (history, features, and limitations), not so much benchmarks of a "$50K server" from Armenia.

TL;DR: If you're okay with the non-Windows world, there are few reasons (other than GPU performance) to buy new Intel or Chinese processors for consumer apps or professional bit-banging (image/video/audio/machine-learning).

Likewise, there is NO reason to buy Apple SoC processors for mission-critical IT (boxes that run backups, or your CFO's workstation that still crunches the Board's spreadsheets or other client-side databases, including QuickBooks); those will all need to be on Intel Xeon processors with ECC memory and a battery/UPS until they're migrated to the cloud.

More than a year from now, I expect major Chinese manufacturers will release cheaper (RISC-V-based) UMA designs (which Apple is rumored to be exploring too), and more than two years from now, I expect Intel to release more expensive (Core-based) UMA designs (for anyone wanting Windows). I think it is a foregone conclusion that Microsoft will get into making its own chips too (to diversify from Intel and Qualcomm and to compete with Amazon and Google).


Anyone else having their M1 freeze sometimes?


Yea. This article is about the hardware, which is great. The freezes, memory leaks, and bugs are from the software, and the latest macOS release really needs more time in the oven. I can't use Firefox any more because WindowServer will eat up over 100% CPU and start allocating RAM and never releasing it until a crash or restart.


I have an Air M1 and it definitely hiccups once in a while. It is noticeable. I am guessing it is Rosetta related.


Tim Cook grabbed the infinity gauntlet and said “Fine, I’ll do it myself”.



