> You can't get an M1 configuration right now larger than 16GB which is a table-stakes baseline dev requirement today.
Everyone on my team has been using 15" MacBook Pros with 16GB RAM for the past 3 years. I suspect most developers run with 16GB of RAM just fine.
I'm not arguing "16GB is fine for all developers everywhere!", but it's absolutely not a hard requirement. I suspect for a lot of us, the difference in performance between 16GB and 32GB is trivial.
Regardless, the thing which is kind of stunning about this chip is that they are getting this kind of performance out of what is basically their MacBook Air CPU. Follow-on CPUs—which will almost certainly support 32GB of RAM—will likely be even faster.
> Regardless, the thing which is kind of stunning about this chip is that they are getting this kind of performance out of what is basically their MacBook Air CPU.
Or to put it a different way: this is the slowest Apple Silicon system that will ever exist.
For a laptop or desktop, likely; but even if the next Apple Watch turns out to be faster (which I doubt), their smart speakers and headphones can probably get by with a slower chip for the next few years.
Is there a name for this trait of bringing unnecessary precision to a discussion, I wonder?
I mean, contextually it’s obvious that the previous poster meant this is the slowest Apple Silicon that will ever exist in a relevant and comparable use case - i.e. a laptop or desktop. And the clarification that yes, slower Apple Silicon may exist for other use cases didn’t really add value to the discussion.
And I’m not even being snide to you - I’m genuinely interested whether there’s a term for it, because I encounter it a lot - in life, and in work. ‘Nitpicking’ and ‘splitting hairs’ don’t quite fit, I think?
I don't have a name for it, but I agree that it should have a name. It's a fascinating behavior. I nitpick all the time, though I don't actually post the nitpicks unless I really believe it's relevant. Usually I find such comments to be non-productive, as you mention.
And yet, even though I often believe nitpicks to be unnecessary parts of any discussion, I also believe there is a certain value to the kind of thinking that leads one to be nitpicky. A good programmer is often nitpicky, because if they aren't they'll write buggy code. The same for scientists, for whom nitpicking is almost the whole point of the job.
It's just an odd duality where nitpicking is good for certain kinds of work, but fails to be useful in discussions.
Everything I have seen from Apple talks about Apple Silicon as the family of processors intended for the Mac, with M1 as the first member of that family.
I know other people have retroactively applied the term “Apple Silicon” to other Apple-designed processors, but I don’t think I’ve seen anything from Apple that does this. Have you?
I think if you have a very specific role where your workload is constant it makes sense. I am an independent contractor and work across a lot of different projects. Some of my client projects require running a full Rails/Worker/DB/Elasticsearch/Redis stack. Then add in my dev tools, browser windows, music, Slack, etc... it adds up. If I want to run a migration for one client in a stack like that and then switch gears to a different project to keep making progress elsewhere, I can do that without shutting things down. Running a VM, for instance: I can boot one with a dedicated 8GB of RAM without compromising the rest of my experience.
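To make that concrete, here's a rough back-of-the-envelope memory budget for that kind of day. The per-process numbers are my own guesses for illustration, not measurements anyone should hold me to:

```python
# Rough, assumed resident-memory figures (GB) for the stack described above.
# These numbers are illustrative guesses, not measurements.
workload = {
    "Rails app + background worker": 2.0,
    "PostgreSQL": 1.0,
    "Elasticsearch (JVM heap + overhead)": 2.0,
    "Redis": 0.5,
    "Editor / dev tools": 2.0,
    "Browser windows": 2.5,
    "Slack + music + misc": 1.5,
    "Client VM (dedicated)": 8.0,
    "macOS + system services": 2.0,
}

total_gb = sum(workload.values())
print(f"Approximate total: {total_gb:.1f} GB")  # ~21.5 GB -- already past 16 GB
```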
That is why I think 16GB is table stakes. It is the absolute minimum anyone in this field should demand in their systems.
Honestly, the cost of more RAM is pretty much negligible. If I am buying laptops for a handful of my engineers, I am surely going to spend $200x5, or whatever the one-time cost is, to give them all an immediate boost. The cost/benefit is strongly in its favor.
All of this is doable in 16GB; I do it every day with a 3.5GB Windows 10 VM running and swap disabled. There are other options as well, such as closing apps or running workloads in the cloud.
Update: Re-reading your comment above, I realized I misread your post and thought you were suggesting 32GB was table stakes... which isn't quite right. Much of what follows is based on that original misread.
I'm not convinced that going from 16GB to 32GB is going to be a huge instant performance boost for a lot of developers. If I was given the choice right now between getting one of these machines with 16GB and getting an Intel with 32GB, I'd probably go with the M1 with 16GB. Everything I've seen around them suggests the trade-offs are worth it.
Obviously we have more choices than that though. For most of us, the best choice is just waiting 6-12 months to get the 32GB version of the M? series CPU.
I've seen others suggest that 32GB is table-stakes in their rush to pooh-pooh the M1.
I, personally, am a developer who has gone from 16GB to 32GB just this past summer, and seen no noticeable performance gains—just a bit less worry about closing my dev work down in the evening when I want to spin up a more resource-intensive game.
I agree with this. I don't think I could argue it's table stakes, but having 32GB and being able to run 3 external monitors, Docker, Slack, Factorio, Xcode + Simulator, Photoshop, and everything else I want without -ever- thinking about resource management is really nice. Everything is ALWAYS available and ready for me to switch to.
People have been saying this kind of thing for years, but so far it doesn't really math out.
Having a CPU "in the cloud" is usually more expensive and slower than just using cycles on the CPU which is on your lap. The economics of this hasn't changed much over the past 10 years and I doubt it's going to change any time soon. Ultimately local computers will always have excess capacity because of the normal bursty nature of general purpose computing. It makes more sense to just upscale that local CPU than to rent a secondary CPU which imposes a bunch of network overhead.
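For what it's worth, this is the sort of back-of-the-envelope math I mean. The prices are placeholder assumptions, not quotes from any particular provider:

```python
# Rough cost comparison: renting a dev-class cloud instance during working hours
# vs. paying once for a beefier local machine. All numbers are assumptions.
hours_per_day = 8
work_days_per_year = 250
years = 3

cloud_rate_per_hour = 0.40   # assumed rate for something like 8 vCPUs / 32 GB
cloud_total = cloud_rate_per_hour * hours_per_day * work_days_per_year * years

local_upgrade = 400.0        # assumed one-time premium for more CPU/RAM locally

print(f"Cloud, {years} years of working hours: ${cloud_total:,.0f}")   # $2,400
print(f"Local upgrade, one-time: ${local_upgrade:,.0f}")               # $400
```

And that still ignores the latency and network overhead a remote CPU adds to every interactive task.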
There are definitely exceptions for things which require particularly large CPU/GPU loads or particularly long jobs, but most developers will be running locally for a long time to come. CPUs like this just make it even harder for cloud compute to make economic sense.
As someone who is using a CPU in the cloud for leisure activities, this is spot on. Unless you rent what basically amounts to a desktop, you're not going to get a GPU and high-performance cores from most cloud providers. They will instead give you the bread-and-butter efficient medium-performance cores with a decent amount of RAM and a lot of network performance, but inevitable latency. The price tag is pretty hefty. After a few months you could just buy a desktop/laptop system that fits your needs much better.
In the mid-1990s, Larry Ellison proposed a thin client: basically a dumb computer with a monitor and a NIC that connected to a powerful server.
For a while we had the web browser, which was kind of a dumb client connected to a powerful server. Then big tech figured out they could save money by pushing processing back to the client via JavaScript frameworks. Maybe if ARM brings down data-center costs by reducing power consumption, we will go back to the server.
I would turn that around. What kind of development are you doing where you feel 32GB is "Barely enough"?
Right now I primarily work on a very complex react based app. I've also done Java, Ruby, Elixir, and Python development and my primary machine has never had 32GB.
More RAM is definitely better, but when I hear phrases like "32GB is barely enough", I have to wonder what in the hell people are working on. Even running K8s with multiple VMs at my previous job I didn't run into any kind of hard stops with 16GB of RAM.
One data point: when I was consulting a year ago, I had to run two fully virtualized desktops just to use the client's awful VPNs and enterprise software. Those VMs, plus a regular developer workload, made my 16GB laptop unusable. Upgrading to 32GB fixed it completely.
Desktops can use less memory than many folks think. I have a Windows 10 VM in 3.5 GB running a VPN, Firefox, a Java DB app, and ssh/git. For single-purpose use, the memory could be decreased further.
I think the art of reducing the memory footprint has been lost. Whenever I configure a VM, for example, the first step is to disable or remove all the unused services and telemetry. That gets it close to an XP-era memory footprint.
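For a Windows guest, the kind of trimming I mean looks roughly like this. The service names are common examples (telemetry, Superfetch, search indexing), not a definitive list; check what your own workload actually needs before disabling anything:

```python
import subprocess

# Illustrative only: disable a few commonly-unneeded Windows services inside a dev VM.
# Run inside the guest with admin rights; verify each service is safe to turn off first.
services_to_disable = [
    "DiagTrack",  # Connected User Experiences and Telemetry
    "SysMain",    # Superfetch / prefetching
    "WSearch",    # Windows Search indexing
]

for svc in services_to_disable:
    # 'sc config <name> start= disabled' keeps the service from starting at boot.
    subprocess.run(["sc", "config", svc, "start=", "disabled"], check=True)
    # Stop it now as well; ignore the error if it wasn't running.
    subprocess.run(["sc", "stop", svc], check=False)
```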
That's not what this discussion is about. 16GB is definitely limiting if you run VMs, but 32GB should be plenty. If you need more, then either you are running very specialized applications, which means your own preferences are out of touch with the average developer's, or you are wasting all of the RAM on random crap.
If you do machine learning or simulations with big datasets and lots of parameters it does become an issue, but I will admit I could just as easily run these things on a server. I don’t think I’ve ever maxed out 32GB doing anything normal.
Sounds like folks never want to close an app. It can be a productivity booster if you want to spend the money and electricity, but it's rarely a requirement.