
One, this is repeating the iPhone vs Android comparisons. iPhones with 4GB of RAM feel faster and get more work done than Android phones running Qualcomm ARM chips with double that amount. The faster the IO, the cheaper paging becomes, and macOS and iOS have had a lot of work put into handling paging well.

Two, this is the entry level processor, made for the Air, which is what we get for students, non-technical family members and spare machines. Let’s see what the “pro” version of this is, the M1X or whatever. We already know this chip isn’t going to go as is into the 16 inch MacBook Pro, the iMac Pro or the Mac Pro. I’d like to see what comes in those boxes.



I get what you're saying, I'm also looking forward to the even higher-performing machines with 12 or 16 core CPUs (8 or 12 performance cores + the 4 efficiency cores?), a 32GB RAM option, 4 Thunderbolt lanes, and more powerful GPUs. Wondering exactly how far Apple can push it, if this is what they can do in essentially a ~20W TDP design.

On the other hand it's quite funny that the title of this article is "16-inch MBP with i9 2x slower than M1 MacBook Air in a real world Rust compile" and the comments are still saying "yeah but this is entry level not pro".

Apparently Pros are more concerned about slotting into the right market segment than getting their work done quickly :)


I may be wrong, but the ecosystem does not really change here, right? I mean, memory management should be roughly the same between x86_64 and ARM in terms of the amount of RAM used, so I guess 16GB of RAM in the old MacBooks is the same as 16GB in the new ones.


All else being equal, yes, but the memory is faster, closer to the chip, has less wiring to go through, and because of vertical integration they can pass by reference instead of copying values internally on the hardware. The last one is big: because all the parts of the SoC trust each other and work together, they can share memory without having to copy data over the bus. That, coupled with superfast SSDs, means the comparison isn't quite Apples to Apples, excuse the pun.

16GB of memory on-die shared by all the components of an SoC is not the same as 16GB made available to separate system components, each of which will attempt to jealously manage their own copy of all data.
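To make the by-reference vs by-value distinction concrete, here's a rough software analogy (purely illustrative; the M1's unified memory does this at the hardware level, not in Python): copying a buffer duplicates the data, while a shared view lets a second consumer read the same underlying bytes with no duplication.

```python
# Hypothetical software analogy for copy vs. shared-reference semantics.
data = bytearray(b"x" * 1_000_000)  # one big buffer, like a texture or frame

copied = bytes(data)       # by value: a second 1 MB allocation
view = memoryview(data)    # by reference: no data is duplicated

data[0] = ord("y")
print(copied[0:1])         # b'x' -- the copy went stale
print(bytes(view[0:1]))    # b'y' -- the view sees the update
```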


I'm not a hardware person, but I do software for a living. Your comment makes things much clearer.

You're saying that the effective difference in having the shared memory is that you get more data passed by reference and not by value at the lower levels?

If that's true, then you get extra throughput by only moving references around instead of shuffling whole blocks of data, and you also gain better resource usage by having the same chunks of allocated memory being shared rather than duplicated across components?


That’s how I understand it, yes. I’m not into hardware either; I'm going by the engineering side of the event. In the announcement, there are some parts shot at the lab/studio where the engineers explain the chip. Ignore the marketing people with their unlabelled graphs; the engineers explain it well.

But yes, they’re basically saying that because this is “unified memory”, there’s no copying. No RAM copies between systems on the SoC, no copies between RAM and VRAM, etc. Because the chips are working together, they put stuff in RAM in formats they can all understand, and just work off that.
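A loose sketch of that "shared format" idea in software terms (an analogy only, not the actual SoC mechanism): two consumers interpret one buffer in place, rather than each keeping its own converted copy.

```python
import struct

# One buffer "in a format they can all understand" -- here, four
# little-endian 32-bit floats. Both consumers work on it in place.
buf = bytearray(struct.pack("<4f", 1.0, 2.0, 3.0, 4.0))
shared = memoryview(buf)

def consumer_a(view):
    # e.g. "CPU": reads straight from the shared buffer, no copy
    return sum(struct.unpack("<4f", view))

def consumer_b(view):
    # e.g. "GPU": scales the values in place, no copy back and forth
    vals = struct.unpack("<4f", view)
    view[:] = struct.pack("<4f", *(v * 2 for v in vals))

before = consumer_a(shared)
consumer_b(shared)
after = consumer_a(shared)
print(before, after)  # 10.0 20.0
```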


> because of vertical integration they can pass by reference instead of copying values internally on the hardware.

got any links about that?


Going by the engineering explanations in the announcement video. See the segments shot in the “lab” set. They’re actually pretty proud of this and are explaining the optimisations quite candidly.


interesting, thanks


"Controlling the ecosystem" and "integration" and such are just wishful-thinking rationalizations. Chrome and Electron will use however much RAM they use; Apple can't magically reduce it. If you need 32GB you need 32GB.


Slack (probably the most popular Electron app) has confirmed they are going native on Apple Silicon: https://twitter.com/SlackEng/status/1326237727667314688?s=20


This is still Electron, no?


My guess is that it's an ARM build of Electron - unless they've been working to bring the iOS version over? That would be a huge win.

Even if this is Electron, I suspect this is still great news for anyone who needs Slack. The Rosetta 2 performance of Electron would likely be a dog, and Slack is a very high-profile app with a lot of visibility.


Yeah, that’s partly true. Applications that allocate 1000GB will need to get what they ask for; there's no getting around bad applications. The benefits are more in terms of lower-level systems communicating by sharing memory instead of sharing memory by communicating, which is generally faster and needs less memory, but requires full trust and integration.
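For a rough application-level picture of "communicating by sharing memory", here's Python's shared_memory module used within one process purely for illustration (the hardware-level trust described above is a different mechanism): two handles map the same block, so a write through one is visible through the other with no copy in between.

```python
from multiprocessing import shared_memory

# Producer creates a named block; consumer attaches to the same block
# by name. Both handles map the same memory -- no copy is made.
producer = shared_memory.SharedMemory(create=True, size=16)
consumer = shared_memory.SharedMemory(name=producer.name)

producer.buf[:5] = b"hello"          # write through one handle...
received = bytes(consumer.buf[:5])   # ...read through the other
print(received)                      # b'hello'

consumer.close()
producer.close()
producer.unlink()
```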



