Hacker News | beefbroccoli's comments

I've been a Linux user since 1995 and run Windows + WSL2 on my desktop machines. It's not too deep, and pretty similar to why so many folks were drawn to Macs: a no-brains, just-works GUI with the ability to launch a terminal and do real work on a UNIX-like system.

I can use a single machine to do everything I need, without rebooting and without making sacrifices.

I can watch Netflix and play games, without needing to write a f'n shell script to fix the screen tearing present in the nvidia driver - or realizing a particular game has quirks or doesn't work in Proton, so I just have to throw up my arms and say "Well, I guess we just don't play that game".

I can pop open a terminal anytime and have access to a real Linux system, as opposed to the faux "uncanny valley Linux" solutions like Cygwin and Git Bash that seem to work until they don't. And unlike a traditional VM there's no management involved; I open the terminal when I need it and close it when I'm done, just like a normal application.


If you want to remove a lot of unnecessary pain in your life, completely abandon the idea of mounting a host FS within a guest VM. Doubly so if they're different OSes/filesystems, and triply so if this is a development VM.


Oh, I have.

My setup is now Remote VS Code to a Linux NUC that sits on my desk. All code lives on the NUC and all tests run on the NUC, but I can use my MacBook Pro as I move around the house, work outside, etc.

It's pretty seamless, I'm impressed.


Oh what… by "Remote VS Code" are you referring to this? https://code.visualstudio.com/docs/remote/remote-overview Seems very handy indeed!


Yeah, the VSCode UI runs on my Mac, but it runs a vscode remote server on my NUC. Debugger, tests, etc. all run on my remote machine.

I originally did this in a Linux VM, but got tired of the battery hit that running a VM 24/7 has.


Neato, that's pretty awesome! You can keep your main machine freed up for actual editing and other usage, while the remote server does all the "heavy lifting". Thanks for the info!


I have had a similar setup with a remote server and a MacBook Air. If internet on trains were a bit more stable it would have been the perfect setup.


Thank you for commenting about this! I've been looking for a good solution to this exact kind of thing.


I’ve had some success with Mutagen’s [0] Docker support. Essentially you can set up a tiny Alpine container or similar with a volume mounted to it. Mutagen will then keep your code in sync using rsync, allowing you to mount that volume in other containers and bypass the performance hit.

It’s a bit of a pain to set up, and you lose some of the advantages of using Docker in the first place, but if you absolutely have to use it, it can get you back to full performance.

[0] https://mutagen.io/
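
To make this concrete, here's a rough sketch of what such a Mutagen project file could look like; the container name and paths are made up, and the exact keys are from memory, so treat it as illustrative rather than copy-paste ready:

```yaml
# mutagen.yml (hypothetical): sync local code into a running container,
# bypassing the slow host-FS bind mount.
sync:
  code:
    alpha: "."                   # local project directory
    beta: "docker://devbox/app"  # path inside the "devbox" container
    ignore:
      vcs: true                  # skip .git and friends
```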


The simple answer is that typically the software that keeps one tied to Windows is the type that wants/needs to be on metal.

Hardware passthrough does exist and I only expect it to improve, but we're still a ways from reaching "just works" level.


I did. But I also have Windows machines, in which I use WSL2. Granted, I don't use GUI applications on them (or see much need to).

It's actually quite nice to have a "real" Linux instance available on virtually any Windows machine. I believe Microsoft is targeting the "developer who uses a Mac" market share with WSL2, and I expect more and more to jump ship, as the experience is significantly better.


Apple wins in the hardware department by far. Even before M1. If this targets anyone it targets developers who play video games.


I think most developers would be happy with a Thinkpad, if it meant they could use Docker without pulling their hair out.


WSL2 uses its own init system; only one kernel (plus a tiny initrd) is virtualized by Hyper-V, and each subsequent Linux instance (distribution) is containerized. There are additional facilities that handle resource allocation dynamically, versus the user specifying a static amount during VM creation.

These (and more) result in the end user interacting with WSL2 the same way they would with any normal application. To think of it as simply a VM isn't quite correct.


So I fired up Ubuntu first for my current WSL2... then I installed a Kali distribution... are you saying that the Kali instance is really just a container inside the Ubuntu WSL2?


No. A minimal initrd + kernel is virtualized. Both Ubuntu and Kali are containers.


Ah gotcha. Neat stuff.


Sure but I could just run Ubuntu and containerd in a VM and it's the same thing. Or just run k8s instead...


Not quite. WSL2 dynamically uses resources as needed (`vmmem`), whereas a traditional VM requires allocating a fixed amount of RAM and CPU from the host machine.
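
That dynamic behavior can also be capped if `vmmem` gets greedy, via a `.wslconfig` file in your Windows user profile. A sketch with example values (not recommendations):

```ini
# %UserProfile%\.wslconfig -- optional global limits for the WSL2 VM.
# Without this file, WSL2 grows and shrinks its resource use on demand.
[wsl2]
memory=8GB
processors=4
swap=2GB
```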


Living in Middle of Nowhere, New Mexico I was actually pretty surprised by how well Google Maps has worked. It knows my dirt/gravel roads pretty well.


I've lived in Washington for 10 years and I have found several roads I am familiar with that have changed due to landslides in that time. I would guess at least 10 roads in the state have washouts that change the course of the road every year. In the wilderness they often aren't repaired and become footpaths.


I can only speculate, but I'd guess the issue is that the trees and mountains cause a lot of multipathing, there are no Wi-Fi stations or cell towers at all to cross-check, and the traffic volume is so low you can't really fall back to statistical methods. That doesn't even get into the fact that there's often a labyrinth of private roads that all technically have the same name, typically given by the nearest large creek or stream. Google seems to have a REALLY hard time differentiating the gated private driveways from the more "arterial" ungated sections of these roads.


Little one-liners like that are arguably better implemented as aliases.
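
To make the contrast concrete, a minimal sketch (the command and names are placeholders), assuming a POSIX-ish shell:

```shell
# As an alias: defined in ~/.bashrc or similar, available only in
# shells that source that file.
alias serve='python3 -m http.server 8080'

# As a standalone script (~/bin/serve): also works from cron jobs,
# Makefiles, and other scripts, and can take an argument:
#   #!/bin/sh
#   exec python3 -m http.server "${1:-8080}"
```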


What advantage do you find with aliases? I like that the shell commands are "one file does one thing."


In many cases one might not want a full stand-alone program with arguments, usage and such. Rather a "script" i.e. something that can be simply executed to run a series of tasks. For example it's quite silly for a build script to require the path of the project it resides in.


It's really not silly. A build script could easily move around inside a project. It could go from the project root, to /root/bin, to /root/bin/build, etc etc. If the inconvenience of arguments troubles you, write a wrapper or alias it or something like that.


Yes, it is silly. You've just added extra complexity to a project for little to no benefit. If you decide to move the script, you take the same steps as if you moved anything else, i.e. update the references.


How would you define complexity such that a script which computes a value is less complex than a script which accepts that pre-computed value?


It's unnecessary complexity for the user. As opposed to simply having the build instructions be `Run bin/build`, you're suggesting `Run bin/build <project root>`. But there's zero reason to require the user to supply the project root when we already know it.


This is exactly the kind of implicit coupling that makes software unmaintainable in the long run. You _are not passing the project root_, you are passing the value of a variable which currently should be set to the project root. The constraints on this value are not that it must be the project root, they are other constraints that are satisfied by the project root right now.


You said earlier that "If the inconvenience of arguments troubles you, write a wrapper". That's precisely what a script like this is. A wrapper. I can run one simple, static command, similar to just about any build tool e.g. `cd project && make`.
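
Concretely, the usual trick for a self-locating `bin/build` is to derive the project root from the script's own path. A minimal sketch (the layout is assumed: `bin/` sits directly under the root):

```shell
#!/bin/sh
# bin/build -- resolves the project root from its own location, so the
# user never has to pass it as an argument.
set -eu

# Absolute path of the directory containing this script.
SCRIPT_DIR=$(CDPATH='' cd -- "$(dirname -- "$0")" && pwd)
# bin/ lives directly under the project root.
PROJECT_ROOT=$(dirname -- "$SCRIPT_DIR")

cd -- "$PROJECT_ROOT"
echo "building in $PROJECT_ROOT"
# ... actual build steps would go here ...
```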


WSL2 is great if you treat it like a Linux system, i.e. don't do goofy things like check out code on NTFS and work on it from Linux.


I gave up and went back to Linux. I had crazy issues with simple things, like not being able to _rename files_ if git on the Linux/WSL side had cloned them (but it would work if Windows git had cloned them). Death by hundreds of little cuts like this for me. Visual Studio Code worked just as well, if not better, on a Mac or Linux.


I really like WSL2, and the VS Code integration between Windows 10 and WSL is simply astounding.


I revisit these tools every so often and I still don't understand how they're any better than simple shell scripts.


There's nothing simple about shell scripts. Idempotency is the main thing - and a great thing.


People talk about idempotency as if it's complicated because it's a five-syllable word.

  if ! which sometool; then
    install sometool
  fi
That's an idempotent shell script, in all its simple glory.
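
Worth noting that `which` isn't specified by POSIX and behaves differently across systems; `command -v` is the portable spelling of the same check. A sketch of the same pattern (the tool name is a placeholder):

```shell
# Same idempotent pattern, using the POSIX-specified `command -v`.
install_if_missing() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "installing $1"
    # real installation step would go here, e.g. a package manager call
  fi
}

install_if_missing sh   # sh is always present, so this is a no-op
```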

