Linux Desktop on Apple Silicon/M1 in Practice (gist.github.com)
265 points by aodaki on March 3, 2021 | 143 comments



I like the attitude of the author of this gist. Just make it work.

My own practice is to use both a very fine System76 Linux laptop and a super fast, if constrained, M1 MacBook Pro. It depends on what I am doing, or sometimes just what I feel like using.

To be perfectly open though, I had considered switching to just using Linux until the M1 was released. The thing that held me back was spending $3500 on a commercial Common Lisp implementation 11 months ago. Going all in on Linux would have cost me more money.


I use Common Lisp daily on macOS and Linux, but use SBCL and Emacs with Sly. I've never tried a commercial implementation, so I cannot compare it to Emacs and SBCL. Are they worth the money vs. the open source tools?


I use LispWorks' proprietary UI library CAPI; otherwise all of my code is portable.

SBCL and Emacs are great; I use them often, too.


How could they be worth it? Vendor lock-in has a huge cost IMO. You're going to have to keep paying for these insanely expensive licenses for years, maybe decades?


If one wants to use Lisp as the Lisp Machines/Interlisp-D allowed for, only the commercial versions offer such experience.


Out of curiosity, which commercial CL implementation would that be? LispWorks? Allegro?


Must be LispWorks. Nothing else is anywhere near as expensive.


Does running M1 somehow save you from having to use Common Lisp, or something?


Sigh. I wish Apple would just release these drivers as a sign of goodwill to the rest of the community. I think these machines could make decent development workstations if they had an M.2 port and native Linux support on day 1. It really feels like the least they can do considering their transgressions against the open source community over the past few years...


This is way beyond wishful thinking. Apple will do what makes the most sense for its business strategy. Given their past transgressions against the open source community, what makes you think that community fits into their business strategy?


I'm not sure what transgressions you're referring to, but releasing some specs would be nice, or updating the open source copy of XNU to a version with M1 in there.


Already happened.

>Okay, so Apple just went and released M1 support files as binaries in their latest XNU source code dump.

There is no specific license applied to these binary files, therefore the top-level APSL license applies.

https://twitter.com/marcan42/status/1355907966541565957


Ugh, binaries. I wonder what the motivation was to keep those closed.


Some of these have been pain points on QEMU (such as Android emulation) for a really long time. The coreaudio fixes and sparse block storage alone are fantastic.


What's the story with OpenGL on macOS? I know it is abandonware in favor of Metal, but are there reasons Apple will at least keep it in the OS / drivers for the foreseeable future? Do any of the ANGLE-type things help when they finally take it away?


It still works, and somebody still seems to be working on it, going by the somewhat random GL-related fixes and regressions in macOS betas since GL was declared deprecated.

AFAIK ANGLE (and MoltenGL) only provide GLES2.x and GLES3.x over Metal, so out of the box those wouldn't be very useful for providing desktop-GL compatibility.

Apparently Zink on top of MoltenVK on top of Metal works, but that looks like a Jenga tower of emulation layers.

Ideally, Apple would rewrite their OpenGL framework on top of Metal (if it hasn't happened yet), and keep that working at least into the 2030s.


In fact, OpenGL on Apple M1 is implemented through AppleMetalOpenGLRenderer.


The Virgil 3D renderer provides some desktop GL emulation, so it is still somewhat usable, but a serious application expecting desktop GL would see glitches.

My recommendation is to use Wayland, and to play serious games on macOS (although such games are not available on Linux AArch64 anyway...)


Virgl3D should support emulating GL-on-GLES use cases. If you're seeing issues, please open bug reports on https://gitlab.freedesktop.org/virgl/virglrenderer/-/issues and we can try and look into them :)


Virgil 3D works well enough for my workload. I saw some problems in the GLES backend support, but they are minor and I have already written patches for them.

However, as feature_list in src/vrend_renderer.c shows, some features are unavailable and cannot be emulated. It is good enough but not perfect (and I think that is reasonable).


>> AFAIK ANGLE (and MoltenGL) only provide GLES2.x and GLES3.x over Metal, so out of the box those wouldn't be very useful for providing desktop-GL compatibility.

I thought Wayland implementations considered one of those a base requirement.


Hi! Virgl3D developer here. Could you please open a MR with your patches so we can look into integrating them into master?


Thank you for your work & reply. I have already submitted some; a few were merged and one is in review.

The other patches are not submitted yet because they require a small change to Epoxy, which is not merged yet, or because I'm being lazy. Maybe I should open an MR as an RFC.


I imagine memory will be quite constrained here for a lot of M1 systems.


The M1x benchmarks that recently leaked show that it supports 32 Gigs of RAM, has twice the number of big CPU cores, and twice the number of GPU cores.

https://www.tomsguide.com/news/macbook-pro-m1x-benchmarks-ju...

It should be adequate for a non-entry level laptop.

One of the fairly accurate Apple leakers, Jon Prosser, has said that one of his proven sources has informed him that the leak is legit.


How does the current M1's graphics performance compare to, say, a mid-range NVIDIA laptop GPU?


It is 30% to 50% slower than a 1050 Ti according to https://ikarios.wordpress.com/2021/01/09/comparing-a-macbook...

That dovetails with my own experience. I installed Tomb Raider on my M1 Air, and while it was playable I was not impressed with its graphical abilities. The M1 is great for integrated graphics, but not on par with dedicated graphics. CPU-wise it is the fastest computer I have, though, noticeably faster than the i5-9600K in my desktop.


Thanks - so it sounds like a 100% speedup would put it into "decent" territory


The Apples to Apples comparison (if you will) is to other integrated GPUs.

>The first Apple-built GPU for a Mac is significantly faster than any integrated GPU we’ve been able to get our hands on, and will no doubt set a new high bar for GPU performance in a laptop. Based on Apple’s own die shots, it’s clear that they spent a sizable portion of the M1’s die on the GPU and associated hardware, and the payoff is a GPU that can rival even low-end discrete GPUs.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


A bit slower than a mobile GTX 1050.


> I imagine memory will be quite constrained here for a lot of M1 systems.

No worries, the storage is fast enough to stand in for RAM via swap! You always wanted your RAM to be a wear item, right?


Definitely, especially when it is soldered to the motherboard or into the chip itself. /s


Computers increase productivity, so E-Waste is no big deal. /s


zswap helps a lot. One can get ~30% compression rates with a not-too-big performance impact.
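
For reference, a minimal sketch of turning it on at runtime via sysfs (these are the standard zswap module parameters; the lz4 choice assumes that compressor is built into or loadable in your kernel):

        # enable zswap, pick a compressor, and cap the pool size (run as root)
        echo 1   > /sys/module/zswap/parameters/enabled
        echo lz4 > /sys/module/zswap/parameters/compressor
        echo 20  > /sys/module/zswap/parameters/max_pool_percent
        # or make it persistent via the kernel command line:
        #   zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20

If debugfs is mounted, the stored_pages and pool_total_size counters under /sys/kernel/debug/zswap/ show the ratio you are actually getting.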


Indeed, it is. 16GB is too constrained even for a native Linux desktop environment.


What the hell, enough with this! My main dev machine until a few months ago was a Dell laptop with 8 GB of RAM on Ubuntu. I ran RubyMine on it all day, some Sublime, Spotify, Slack, MariaDB, Thunderbird, LibreOffice, and I had no problems whatsoever. The only reason it ever ran out of RAM was a memory leak in Firefox.


Things really go haywire when you are developing C++ with clangd (the de facto code analyzer for Visual Studio Code, CLion, etc.). It sometimes takes over 8 gigs of RAM for just the editor. It also takes a ridiculous amount of memory to just compile your code, especially if you're using Eigen.


Indeed, as mentioned before, I used to "surf the web" in the 90s with 8MB of RAM. :-)

8GB is plenty for normal work; my current 16GB is plenty for development work, including a 4GB Windows 10 VM running on top of Linux.


That's good for you and I'm glad you're happy, but for some of us, considering the work we are doing, sometimes even 32GB is not really enough. Different people, different needs.


That's fine, but people like that are mostly outliers in the dev community; they exist on Linux, Windows and even macOS, and they definitely KNOW perfectly well what their requirements are and why.

The post I was replying to said something very different and much weirder.


What do you need 32GB for? Video editing? ML training on CPU? I have to imagine this is a minority.


If someone needs 32GB then they need 32GB.

The M1 is Apple’s low-end chip, currently available only on their cheapest computers. It’s like complaining that a Corolla can’t keep up with a Ferrari on a race track. The most charitable interpretation is that the person doesn’t understand the market position of the two items.


I’m just asking what would be a use case where you would actually need this


I find it easy to sneak up on 16gb of memory when running basic apps (editor, bunch of browser tabs, zoom, spotify) plus a complete environment for a mid sized web app (multiple services, dbs, message queue, caches, js bundler, workers etc), especially if I’m running tests and have mobile device VMs up. Pretty cheap insurance over the life of a machine to get 32gb of RAM and never have to think about it.


Right, but that’s what swap is for. It doesn’t necessarily mean that you need more than 16GB.


Perhaps you're correct that I don't need it in the sense that I only very occasionally find myself with a dataset or something that doesn't fit in memory, and could just run a high memory cloud instance for a bit to process it.

Even still, my experience of desktop Linux under memory pressure has been frustrating, even with fast SSDs, and overspeccing the memory is an inexpensive guarantee that my system will never start thrashing during an important screenshare demo or something, so it's an obvious choice if I'm shopping for a new computer.

When I built this computer last year the cost difference between 16 and 32GB was like $40...easy to justify a 2-3% premium on the overall cost of the machine to never have to give a second thought to conserving memory. That said, Apple charges $400 for the same upgrade (in their machines that support 32GB), so the calculus there is a bit different.


Desktop Linux tends to be a bit of a memory hog compared to MacOS. I think a lot of Linux users would be surprised how usable even the 8GB macs are for most tasks.


My work desktop is 32GB and it falls over any time I try to create really big R data sets for others. I have to use a cloud machine with 64GB, and I run that out of memory most of the time when trying to optimize the production pipeline. They refuse to give me anything larger, so that's my upper limit. If anyone knows how to create giant .rds files without storing everything in memory first, I'd love to hear it.


Virtual Machines.

Or a web browser with a lot of tabs.


I never close Chrome tabs.


That's fine; it's not like it keeps them all open. I've had a couple hundred tabs open in Firefox while researching a project, using Tree Style Tab to organize them. Modern browsers cache pages to disk after a certain high-water mark. People actually seem to think all those tabs stay in memory.


It was meant slightly in jest; but looking at Activity Monitor on my 32GB RAM Macbook Pro, it looks like I'm currently using ~28GB. I have Docker & a few Node.js processes (webpack builds, typescript compiler, language server, etc) taking about ~10GB between them, and then a sea of "Google Chrome Helper (Renderer)" processes each taking between 100 and 900MB. There are at least 20 of these, and then also the usual suspects with Slack, Skype (yes), Finder, etc.

Honestly, I could probably do with 16GB right now, but I'm planning on keeping this machine for at least 5 years; it was worth the few hundred bucks extra to future-proof it.


Browsers will quickly eat up all of your available RAM if you open enough tabs. The thing is, if you had less RAM, they'd be keeping fewer tabs alive in RAM. So you can't really infer from "I'm using X amount of RAM now" to "I need at least X amount of RAM". If you upgraded to 64GB you'd probably end up 'using' a lot more than 28GB for the exact same workflow.


Simply open a blank page, quit the browser, and restart it. Now, only load the two or three pages/sites you really need. Simple as that to bring memory use under 500MB, with Firefox at least. Repeat this once a day.

I personally close my browser at night and load it in the morning.


Oh yeah, definitely! I was just RAM-constrained on my previous machine (8GB) when running Docker workloads, so I made sure to future-proof this one.


re. the "usual suspects", I got an M1 air late last year, and I've decided to keep it completely separate from my work laptop, so all it has installed is basically firefox, a couple of code editors, and whatever came with it. It absolutely screams compared to my other laptop, but I wonder how much of that is the M1 processor, and how much of it is because I don't have all this garbage running all the time.


Apple brands one of these machines as a "Pro" system and previously offered 32GB in that model's option range.

The person isn't the one failing to understand the market position of the two items; Apple is the one that failed to brand it appropriately. It should have been a MacBook Air & MacBook, not a MacBook Pro.

Although realistically, at this point "Pro" has lost nearly all meaning in Apple's lineup. It's like an R badge on a car. It used to mean something specific; now it just means a generically higher premium option.


> Apple brands one of these machines as a "Pro" system and previously offered 32GB in that model's option range.

They still do. The Intel, 4-port, 13” MacBook Pro is still available, and can be configured with 32GB of RAM. I don’t think it would be a sensible purchase at this point though.

That 2-port Pro has no reason to exist IMO. Even on Intel it used a chip with a TDP closer to the Air than the 4-port models; now on ARM they’re using the exact same part. Yeah it has a fan but most workloads will never turn the thing on.


>> What do you need 32GB for?

Massive projects. Or a couple VM's.

The first laptop I saw running a VM was a Dell Precision M50 (?); it was 2" thick and almost 10 lbs. It had something ridiculous (at the time) like 1 or 1.5GB of RAM.

A sales guy was demoing a product to us, and it spun up a Windows 2000 VM for IIS and another for SQL Server. It probably took 10 minutes to get started, but he could then demo the app through the browser. Sounds silly, but it worked.

"But can't you just run it on a server somewhere"

Yes. That'd be cheaper and faster. But at this high end of the market, it might make more sense for some people.


I work on a medium-to-large sized project that has a Go backend and a React/TypeScript frontend. Having our full development environment running (a handful of Go processes, Webpack, PostgreSQL, Redis, an indexing server, and probably other stuff I'm forgetting) and trying to edit code in both simultaneously (so having LSP servers running for both Go and TypeScript) is usually enough to cause my 16 GB laptop to swap heavily. It's not a pleasant experience: think multi-second UI lockups, music skipping, that kind of thing.

So, realistically, at this point I need a 32 GB machine to effectively do my job. Minority? Sure. But I think there are enough people in my boat that it's a legitimate market to have an option for.


Not the OP, but I fill 32GB pretty decently when I run my work’s customer facing web stack, database server, ancillary services, and a couple search engine instances. Prior to moving to locally hosted containers my main memory gripe was Electron. I much prefer having all of my development localized though. It’s a lot closer to the good parts of when I wrote “shrink wrapped” desktop software.


Run and compile very big C++/.NET applications and big scenes in Unreal Engine and Houdini.

edit: I love how I got downvoted because my work can't fit into a toy Mac M1


Running a Filecoin full node. Mind you, not mining, just verifying.

https://docs.filecoin.io/get-started/lotus/installation/#min...


Somehow if you have that kind of memory it tends to fill up.


Too many electron apps? :P


Boy, isn't that the truth. I actively avoid them, but sometimes you just can't, like with signal-desktop. I actually have to keep it running because I don't want to reach for my phone every 10 minutes. Something as simple as that interface doesn't need Electron, but oh well. Maybe a lite version with just text and contacts would be acceptable. I don't need stickers, GIFs and emojis.


Is it? Up until now most devs still go with 16 or even 8 GB laptops. 32GB laptops are few and far between (Macs or not), and far fewer people use desktops...

And we used to run VMware with Windows (on Linux) or Linux (on Windows) in 2005 just fine, on laptops with a couple GB at best...

Dozens of devs I know in a company I work with have 8GB Macs (and 2-3 years old at that), and run Vagrant and Docker with no issue. And those are mostly Airs with shared GPU too...

I think people with 32GB laptops hugely overestimate how many other devs also are on 32GB...


I have a 2019 MBP with 32GB of RAM. I bought it thinking that I would be running a Windows VM under Parallels, on which I would then let my company install the execrable Microsoft "mobile iron" access control. (You know, the one where they can wipe the machine remotely.) But then they changed things so that even machines with this enabled cannot access the company resources on Azure. It must be one of their own machines. (So what's the freaking point?) Anyway, if it had worked out, that's definitely a use case for more than 16.


> no issue.

Could this be because they don't understand how much faster it can be -- the slowness is an accepted cost of doing work?

I've recently run VirtualBox/Vagrant/Docker on an Air from ~3 years ago, and it's painfully slow compared to my 3-year-old system with 32GB RAM. It works perfectly fine, but it's slow.

> I think people with 32GB laptops

That may be true. It may also be true that people with 8GB laptops vastly underestimate the benefit of having more memory available for development workloads.


A Linux desktop environment runs fine with 4GB.


I think the people who claim that 16 GB isn't enough are insane, but it should be said: at 4 GB it does become a bit problematic to both keep browsers with a bunch of tabs open and do other things smoothly. If we keep modern website hogs out of the picture, I completely agree with you.

I tally about 500-1000 MB doing my job – if I can close the browsers that I do admittedly need to read documentation.


2GB here, Chromium under OpenBSD with cwm(1).

my ~/.config/chromium-flags.conf file:

        --ignore-gpu-blocklist
        --enable-gpu-rasterization
        #--enable-zero-copy
        --disable-sync-preferences
        --disable-reading-from-canvas
        --disable-smooth-scrolling
        --wm-window-animations-disabled
        --enable-low-end-device-mode


> I think the people who claim that 16 GB isn't enough are insane

The problem is containers. Some development workflows appear to require running a container for every piece of software needed, and that adds up (and feels incredibly wasteful).

None of my development workflow requires containers locally, so I get by just fine with 16GB.


I wonder when the pendulum will swing back and people will stop this insanity. I can already picture the buzzwords people will invent to describe "apps running on the OS itself".


On the other hand, it's apparently no insanity to run millions of lines of code on a computer connected to the internet, hopefully sharing all the projects you need to work with and the dependencies you need to make them work, with full access to your filesystem, usually requiring administrator rights to be installed or even used.

This would all be needless if there were an OS that allowed you to switch to another "env", wiping the RAM in the process (like, storing a snapshot of it for when you switch back), with guaranteed isolation at the filesystem level (perhaps still able to read-only link into common libs), and able to install things without touching other "envs". If you're working in any webdev-related environment, that is itself the definition of insanity.

It's like buying a hammer from the carpenter at the end of the road and giving him access to your whole house in the process, including your wife. Everything becomes a nail.


Maybe I'm just a simplistic person, but classical Linux distributions make this problem non-existent for 99% of what goes on in my world.

Their role is precisely to orchestrate the cooperation and interdependency of those millions of lines of code. I don't understand why people have started turning those distros into glorified delivery vehicles for containers.


Simplistic is actually good - but I don't think that any (?) of the currently available OSes is able to contain a program once it runs as administrator, right? If I'm not recalling it incorrectly: BSD jails, Linux cgroups, user filesystem permissions, Docker, VMs - and that's just at the surface.


"A bare-OS application" sounds almost as cool as "bare-metal".


That would require people to clean up. Sometimes I squint at the installation requirements and then head over to hub.docker.com. I think it will get worse (i.e. more layers on top) before it gets better.


Yeah, when you see "install this docker" OR "follow this 10,000-line install and compile guide", and then have to explain to your boss what you've been up to today.


Yeah, containers or VMs do up the count indeed, but still. I have 2 Parallels VMs (the test version for M1) running Ubuntu Server: one with 2 SSH connections (for X and tunnelling), running Emacs onto my desktop plus a web server with a REPL and environment, and the other running PostgreSQL. With Safari (8 tabs open), 4 terminal windows, and XQuartz, the biggest memory footprint is the VM with Emacs/server/REPL at 3.9GB, followed by the other VM at 2.8GB, but the overall used memory says 6GB (?).

Of the tabs, Gmail was at +1GB - a browser email client consuming more than 1/3 of the memory of a VM running an OS plus a database.

I sometimes have another VM, another 2 SSH connections, and more tabs open (including YouTube), and have never noticed any slow/sluggish behaviour.

I also feel like running VMs is wasteful, but it seems we are not able to create OSes with proper native tools for isolation.


Why would containers do anything to memory requirements? I guess you can't share libraries, but otherwise I don't see why memory use should be any different running the same process in a container or not.


> The problem is containers.

Even this makes little sense unless your containers are really poorly sized. I used to run a mini Kubernetes cluster on my desktop with 6 nodes, some of them running big Java projects, Postgres, Elasticsearch, and Ruby. And I still had enough memory to run RubyMine or IntelliJ along with Slack and whatever other local stuff I needed.


Tab-discard extensions go a long way toward keeping browsers under control; just pin the tabs that you absolutely can't wait 3 seconds to reload, or that you need notifications from.


A Linux desktop environment runs fine with 2GB.


Can confirm, my VMs default to 4GB of memory and run just fine.


Yeah, but throw in Firefox or Chrome and some Electron apps, and your processor will be crying and your memory starting to overflow.


Good luck compiling large sources on 4GB.


Since when is compiling large sources a desktop environment?

There are projects that you cannot link if you have under 32 GB, but that is not enough to say that the desktop environment you use while compiling is unusable under 32 GB.


2GB-of-RAM user here; I've done that with clang, as it uses far less memory than GCC.
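
In practice that mostly means keeping the job count down as well; a rough, generic sketch (a plain Makefile-style project is assumed, nothing specific):

        # low-memory build: a single job, and clang/clang++ instead of GCC
        make -j1 CC=clang CXX=clang++

The peak usually comes from a handful of heavyweight translation units or the final link, so serializing the jobs is what keeps it from hitting swap.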


I mean, really, how many people are recompiling their entire code base every single day? Is their make/build environment so bad that they can't trust incremental compiles? I'm getting along just fine on my 8GB laptop. I know some people work with big datasets and video and such, but holy cow. Most of us are probably just writing code for a browser/backend/library/test and not pushing around terabytes of data.


Maybe these people never heard of ccache.


Running Fedora/GNOME on an i5/8GB laptop, I've never ever felt memory constrained.


My 5-year-old laptop is an i5/8GB with Fedora/GNOME (my primary machine is a desktop) and that is usually ok, but there are certainly things I can't do (like use Google Earth while having slack or a web browser open).


You won't, but I think some HN people have to push around a lot of data, and the fastest way to do that is to get as much of it in memory as possible. Others love their Electron apps and Docker suites and VMs.


For a regular user, probably not. Contemporary dev work with spurious containerization on the other hand…

Maybe some companies should give their employees two systems: your average way-too-expensive bragging-rights laptop, and a smaller, older one which is the only one you're allowed to do your code reviews/productive commits on.


How about an undersized laptop for testing and coding, but a personal server under their desk with k8s already installed?


People, for some reason, really like their laptops. They also have more brand recognition, thus the "bragging rights" part, when you get your new X1/MacBook every 2 years.

And if you still work on your laptop but have your personal server box as a CPU slave, that still leaves some issues. You might be able to do your build steps on the machine, but your IDE still runs on your laptop, so that needs to be beefy enough -- never mind auxiliary applications like browsers, Photoshop, CAD, etc. Also, it's quite hard to get a slow laptop with decent ergonomics, unless you buy used (not that much of an option for companies).

Also, once you go headless server, it's better to just put it online anyway, as you don't need 24/7 access.


I mean, Android Studio eats too much memory.


I run KDE with other JetBrains IDEs and a 2 GB VM in the background every day, and the maximum memory usage was 14 GB.


Are you joking?! That depends entirely on what you're doing! I could do my day to day work on 16 GB just fine. I could probably live fine with 8 GB if I actively closed web pages that I didn't actually need to keep open.


I suppose it's what you do with it. I use about 6GB of an 8GB system, with the rest allocated to buffer cache. Firefox, VS Code, terminals, mapping software...I can't use 16GB unless I spin up a VM.


this is just nonsense


Awesome achievement, but like most OpenGL bridge techniques, 15fps with artifacts is not usable.


The demos run from the terminal seem to run at a much higher frame rate. The Firefox WebGL demo slowdown may be due to other factors.


Right. Frame rate heavily depends on workload.

For me, it is completely fine. I just develop some software with little graphics load, use web applications, and watch some videos.

You are out of luck here if you are going to play modern games, but modern proprietary game engines do not support AArch64 Linux anyway. It would be a difficult time for the Linux desktop if other PC vendors migrated to AArch64.


It's still a really large discrepancy. I would guess something is broken somewhere. What is the WebGL frame rate on native Mac? Have you tried running those WebGL demos with Chromium?


You can see the native Mac demo at: https://www.youtube.com/watch?v=ezvQPREjN1s

Note that I haven't really optimized it yet. Certainly there should be low-hanging fruit, but I haven't bothered to go after it, as it is good enough for me: GNOME is no longer laggy, and I know some overheads cannot be eliminated.

If I had to do a graphics-intensive workload, I would simply run it on macOS or buy another accelerator, as the M1 is not the most performant graphics accelerator on the market anyway.


15fps in the browser? Just fine. Who cares about WebGL performance in the browser for dev work, unless they're in that racket?

He gets 700fps and comes close to native OpenGL in the benchmarks he runs...


This is amazing. I'm not sure I will end up using the M1 port of Linux if QEMU is good enough.


I love seeing people beat problems into submission.


Would any of this work be useful to run Linux on an iPad?


The A12X and A12Z don't have hypervisor support, so the hvf accelerator isn't going to work.


Basic ARM can run VMs under paravirtualization, ala https://community.arm.com/developer/ip-products/processors/b...


Maybe, but it still requires more effort.


Ubuntu Desktop (arm64) works pretty well under Parallels in macOS on M1 (hypervisor).


I agree, but 'pretty' is the operative word. Sometimes the network gets suuuper slow inside the VM for me, and I still have no idea why.


It could be related to how many CPUs you allocate for the VM. I use a single CPU, and networking is extremely fast, with very low ping times. (I mainly use the headless server version, no X11.)


Can't you just run X11 on the Mac and Linux in a VM?


You could, but then you would still be in the macOS desktop environment. I think he doesn't just want to run Linux applications, but instead wants the Linux desktop environment.


Desktop environments on Linux basically just draw into a full-screen X window, don't they? (And then the programs you run are children of that root window.) I wonder how hard it would be to hack up XQuartz to support that...


You wouldn't even have to hack it up. There's an option in XQuartz to make it run full-screen, at which point you can run a standard X11 DE.
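
Roughly, the flow looks like this (just a sketch; it assumes a Linux guest reachable over SSH as "linux-vm" with something like Xfce installed, and XQuartz switched to full-screen mode in its preferences):

        # on the Mac, with XQuartz in full-screen mode:
        ssh -X user@linux-vm           # forward X11 from the guest back to XQuartz
        dbus-launch xfce4-session      # start a standard X11 desktop session on the forwarded display

Plain X forwarding of a whole DE isn't fast, but it does get you a Linux desktop drawn by XQuartz.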

The main issue with XQuartz nowadays is that it doesn't support Retina displays in windowed mode. Everything runs pixel-doubled by default.


I haven't tried XQuartz at all because I didn't think that kind of software has plenty of resources behind it or is well maintained. Retina support is just one problem caused by the lack of maintenance.

There are only two options: hack Virgil 3D or hack XQuartz. You cannot "just" run XQuartz and solve the problems. And I chose to hack Virgil 3D because it should have less communication overhead and work with Wayland.


XQuartz doesn't have HiDPI support (or if it does, please tell me how to enable it!)


It's a total PITA, but it can be done as long as you are ok with full screen. I use it to run PixInsight on FreeBSD and view it on macOS: https://xw.is/wiki/HiDPI_XQuartz

/edit: oh, and yeah, if you patch xrandr not to return, I suspect that the resolution change will persist on switching between macOS and the unix desktop, though I haven't tested that yet.


Why is this such a big issue? Can't you install any ARM Linux distro from a flash drive? Does that not work? I get that device drivers may not be perfect for any new laptop, but it should mostly work.


Linus Torvalds: ARM has a lot to learn from the PC:

"I think ARM is a very promising platform," he said. "At the same time, the ARM community has never had the notion of a standard platform. ARM never had the PC."

https://www.networkworld.com/article/2220438/linus-torvalds-...


Nothing else ever had the PC. And IBM wasn't too happy that it happened to them, either.


It definitely makes Apple's decision to never allow generics seem wise in retrospect.


But they did at one point in the late '90s. I wouldn't say that was the only reason why they nearly went bankrupt, but it certainly didn't help.


Yeah, "never" was the wrong word. I forgot about that.


In a way, they themselves started becoming a manufacturer of "IBM compatibles"; they just locked their OS to them.

A Jobs-less alternate Apple universe might've been interesting, though. He both canned the licensed Apple clones and made OpenStep into the next major MacOS revision. If Copland had become OS 8, or BeOS OS X, there could have been viable clones, and thus maybe enough critical mass to keep PowerPC alive a bit longer.


We did, actually, but Acorn canned it: https://en.wikipedia.org/wiki/Phoebe_(computer)


That's not what he means by PC. He means an open standard that anyone can build against, with compatibility between products.


That’s exactly what it was. Two other vendors were going to launch as well.


The Phoebe's launch OS (RISC OS 4) was codenamed "Ursula". Unfortunately, shortly into its development, the Phoebe went on an (indefinite) break.


In 1998, it would have been an excellent opportunity to use Linux. At the time, I was playing with Apple's MkLinux on PowerMacs.


Very interesting; the reason for cancelling seems to largely hold true today.


I'm curious, is there any reason that ARM doesn't have a standard platform? Or are they working on that?


It does, and it's called Arm SystemReady.

UEFI + ACPI is required on Arm if you want to boot Windows or RHEL/CentOS, which do not even try to boot on non-standardised 64-bit Arm systems.


Same for SLES/openSUSE, except for some platforms (RPi) where they use u-boot to provide EFI.


They are different vendors, working on different products...


There's a lot of proprietary hardware inside the M1 MacBooks. Device drivers exist only thanks to reverse-engineering efforts. For this reason, no, you can't just install any vanilla ARM distro and expect things to work.



