
Only in single threaded performance, which nobody actually uses for rendering.

In multi-threaded, the Ryzen 5950X scores 28,641 while the M1 scores 7,833. So no, the Mac Mini maxes out at 27% of the Ryzen 5950X if you use it properly. And I was already being generous by using the M1 number for a native port; in reality you'll likely need Rosetta and take a ~33% performance hit.
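For reference, the 27% figure is just the ratio of the two multi-core scores quoted above, e.g.:

```shell
# Multi-core scores quoted in this thread: Ryzen 5950X vs Apple M1
awk 'BEGIN {
  ryzen = 28641; m1 = 7833
  printf "M1 reaches %.0f%% of the 5950X\n", m1 / ryzen * 100
}'
# → M1 reaches 27% of the 5950X
```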




I think the overall point is that for the average user, who doesn't need all those cores or couldn't make good use of them, the M1 may in fact feel / be faster.

For users like you or me, of course we'd see a huge difference, but not everyone is running workloads that need more than 2 or 4 cores.


An average user is going to buy a 5600X or whatever not the 5950X, and the 5600X's single-threaded performance is barely behind the 5950X. You only get a 5950X if you want multi-threaded performance.


I have a theory on why ST perf is always the most important metric for me and some other folks. When you're waiting on something synchronously, like a webpage rendering or an app opening, you're usually running a single-threaded load. Work that can benefit from multithreading is usually a planned task, so does it really matter if it takes 4 minutes instead of 3? You'll context-switch either way.


Right now I have ~20 tabs open and a few apps, a workload which is probably similar to the average user's. My machine currently has 510 processes running with 2379 threads, though most of them are in the background. I'd wager core count is more important than ST performance nowadays, especially since applications seem to be multicore-optimized.


I’d check your activity monitor to see how many of those are sleeping. My suspicion is that most of them probably are, likely to the point where you are using “less than a core” to handle the load.
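One rough way to check this from a terminal is to tally processes by scheduler state (a sketch using Linux procps syntax; the flags and state letters differ slightly on macOS, where Activity Monitor shows the same thing):

```shell
# Count processes by state: S/I = sleeping/idle, R = runnable, etc.
# On most systems the S/I bucket dwarfs everything else.
ps -eo stat= | cut -c1 | sort | uniq -c | sort -rn
```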


> For users like you or me, of course we'd see a huge difference, but not everyone is running workloads that need more than 2 or 4 cores.

It’s hard to imagine that a regular person playing games, editing the family photos, or editing the kids' birthday party videos isn't using multiple cores for almost everything they do.

Even browsing the web these days uses multiple cores.

Apple wouldn't have made the investment if people couldn't see and feel real world results.


Just using a web browser these days requires many threads and processes to run at once.


Depends on what you're doing. For example, compiling is multi-core, but linking is normally single-core. Many workloads are still heavily single-core-dependent, so great single-core performance is still a big asset.


> linking is normally single-core.

GNU gold was doing threaded linking 15 years ago, and nowadays threaded linking is the default for newer linkers like LLVM's lld. Unless you rely on very specific GNU linker hacks, there's no reason not to use lld; it works fine for linking large software like LLVM/Clang, Qt, ffmpeg...
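Opting in from a clang-driven build is a one-flag change (a sketch; the `--threads=N` form assumes a reasonably recent lld, since older releases took a bare `--threads` switch, and `main.o`/`util.o` are placeholder object files):

```shell
# Use lld instead of the default linker, with 8 link-time threads
clang -fuse-ld=lld -Wl,--threads=8 -o app main.o util.o
```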


Parts of the linker basically have to run in a single core, though.


Yeah, but this is a laptop chip at ~20W. Of course it's not going to compete with a 16-core 120W monster.

Getting 1/4 of the performance with 1/4 of the (high perf) cores and 1/6 of the power is very impressive.
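The back-of-envelope perf-per-watt comparison, using the scores and TDP figures quoted in this thread:

```shell
# Multi-core score divided by rough TDP for each chip
awk 'BEGIN {
  printf "M1: %.0f pts/W, 5950X: %.0f pts/W\n", 7833 / 20, 28641 / 120
}'
# → M1: 392 pts/W, 5950X: 239 pts/W
```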


But the Ryzen 5950X has 16 cores while the M1 has only 4 high performance and 4 low performance cores. So the Ryzen gets 4x multi-core performance with 4x the cores.


I wonder if Apple will bother to produce a CPU with desktop-level TDP. That would really compete with the Ryzens.


I really hope so. And I’d think they’d want something to put in their new iMac, which could be a CPU with 16 M1 cores resulting in a 100 watt TDP.



