I've been a Linux user since 1995 and run Windows + WSL2 on my desktop machines. It's not too deep, and pretty similar to why so many folks were drawn to Macs: a no-brains, just-works GUI with the ability to launch a terminal and do real work on a UNIX-like system.
I can use a single machine to do everything I need, without rebooting and without making sacrifices.
I can watch Netflix and play games, without needing to write a f'n shell script to fix the screen tearing present in the nvidia driver - or realizing a particular game has quirks or doesn't work in Proton, so I just have to throw up my arms and say "Well, I guess we just don't play that game".
I can pop open a terminal anytime and have access to a real Linux system, as opposed to the faux "uncanny valley linux" solutions like Cygwin and Git Bash that seem to work until they don't. And unlike a traditional VM there's no management involved; I open the terminal when I need it and close it when I'm done, just like a normal application.
If you want to remove a lot of unnecessary pain in your life, completely abandon the idea of mounting a host FS within a guest VM. Doubly so if they're different OS's/FS's, and triply so if this is a development VM.
My setup is now Remote VS Code to a Linux NUC that sits on my desk. All code lives on the NUC and all tests run on the NUC, but I can use my MacBook Pro as I move around the house, work outside, etc.
Neato, that's pretty awesome! Can keep your main machine freed up to deal with actual editing and other usage, while the remote server does all the "heavy lifting". Thanks for the info!
I’ve had some success with Mutagen’s [0] Docker support.
Essentially you can set up a tiny Alpine container or similar with a volume mounted to it. Mutagen then keeps your code in sync using its own rsync-style file synchronization, allowing you to mount that volume in other containers and bypass the performance hit.
It’s a bit of a pain to set up, and you lose some of the advantages of using Docker in the first place, but if you absolutely have to use it, it can get you back to full performance.
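To make the setup concrete, here's a minimal sketch of a Mutagen project file for this arrangement; the container name, mount path, and session name are placeholders, not something from the thread:

```yaml
# mutagen.yml - hypothetical example; "my-sync-container" and "/app" are placeholders.
# `mutagen project start` creates the session: local code is synced into the helper
# container's volume, which other containers can then mount at full speed.
sync:
  code:
    alpha: "."                              # local project directory
    beta: "docker://my-sync-container/app"  # volume-backed path in the helper container
    ignore:
      vcs: true                             # don't sync .git and friends
```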
I did. But I also have Windows machines, in which I use WSL2. Granted, I don't use GUI applications on them (or see much need to).
It's actually quite nice to have a "real" linux instance available on virtually any Windows machine. I believe Microsoft is targeting the "developer who uses a Mac" market share with WSL2 and I expect more and more to jump ship, as the experience is significantly better.
WSL2 uses its own init system - only one kernel (plus a tiny initrd) is virtualized by Hyper-V; each subsequent Linux instance (distribution) is containerized. There are additional facilities that handle resource allocation dynamically, versus the user specifying a static amount during VM creation.
These (and more) result in the end-user interacting with WSL2 the same way they would any normal application. To think of it as simply a VM isn't quite correct.
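That said, the dynamic allocation can still be capped if you want. As a hedged example, a `.wslconfig` file in the Windows user profile limits what the single WSL2 utility VM may consume:

```ini
; %UserProfile%\.wslconfig - optional caps on the WSL2 VM.
; Without this file, memory and CPU are assigned dynamically.
[wsl2]
memory=8GB      ; upper bound on RAM the VM can claim
processors=4    ; number of virtual CPUs exposed to Linux
```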
So I fired up Ubuntu first for my current WSL2... then I installed a Kali distribution... are you saying that the Kali instance is really just a container inside the Ubuntu WSL2?
Not quite. WSL2 dynamically uses resources as needed (`vmmem`), whereas a traditional VM requires allocating a fixed amount of RAM and CPU from the host machine.
I've lived in Washington for 10 years and I have found several roads I am familiar with that have changed due to landslides in that time. I would guess at least 10 roads in the state have washouts that change the course of the road every year. In the wilderness they often aren't repaired and become footpaths.
I can only speculate, but I'd guess the issue is that the trees and mountains cause a lot of multipathing, there's no wifi stations or cell towers at all to cross-check, and the traffic volume is so low you can't really fall back to statistical methods. That doesn't even get into the fact that there's often a labyrinth of private roads that all technically have the same name, typically given by the nearest large creek or stream. Google seems to have a REALLY hard time differentiating the gated private driveways from the more "arterial" ungated sections of these roads.
In many cases one might not want a full stand-alone program with arguments, usage and such. Rather a "script" i.e. something that can be simply executed to run a series of tasks. For example it's quite silly for a build script to require the path of the project it resides in.
It's really not silly. A build script could easily move around inside a project. It could go from the project root, to /root/bin, to /root/bin/build, etc etc. If the inconvenience of arguments troubles you, write a wrapper or alias it or something like that.
Yes, it is silly. You've just added extra complexity to a project for little to no benefit. If you decide to move the script, you take the same steps as if you moved anything else, i.e. update the references.
It's unnecessary complexity for the user. Instead of the build instructions simply being "run `bin/build`", you're suggesting "run `bin/build <project root>`". But there's zero reason to require the user to supply the project root when we already know it.
This is exactly the kind of implicit coupling that makes software unmaintainable in the long run. You _are not passing the project root_, you are passing the value of a variable which currently should be set to the project root. The constraints on this value are not that it must be the project root, they are other constraints that are satisfied by the project root right now.
You said earlier that "If the inconvenience of arguments troubles you, write a wrapper". That's precisely what a script like this is. A wrapper. I can run one simple, static command, similar to just about any build tool e.g. `cd project && make`.
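As a sketch of why the argument is unnecessary: a POSIX shell script can derive the project root from its own location. This assumes the script lives at `<project root>/bin/build`, as in the example above:

```shell
#!/bin/sh
# Hypothetical bin/build: derive the project root from the script's own path
# instead of requiring the caller to pass it as an argument.
PROJECT_ROOT=$(cd "$(dirname "$0")/.." && pwd)
echo "building from $PROJECT_ROOT"
```

If the script later moves to a different depth, only the `..` changes; callers keep running the same static command.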
I gave up and went back to Linux. I had crazy issues with simple things - like not being able to _rename files_ if git inside WSL had cloned them (but it would work if Windows git had cloned them). Death by hundreds of little cuts like this for me. Visual Studio Code worked just as well, if not better, on a Mac or Linux.