As it happens, the Luke-Yoda phraseology like "Do or do not. There is no try" seems to have been drawn by George Lucas from the popular late-1960s books by Carlos Castaneda about a mentorship (~7 million copies sold). That, in turn, seems to be based on the apprenticeship Castaneda had with his UCLA PhD advisor, Harold Garfinkel:
This looks cool, and the script actually seems pretty small, but I am not entirely sure I understand what it does.
My naive understanding from skimming through the code:
1- It mounts the entire (?) filesystem in what I will call "overlayfs mount points" (doesn't it need root access for that?)
2- It executes the command on that "sandbox", which looks exactly like the rootfs, but is actually an overlay
3- Once the command returns, it knows what has been changed in the overlay, and shows it to the user. The user can then "commit" those changes (in which case I assume the command writes them from the overlayfs to the actual rootfs)
Everything that does not happen on the filesystem (e.g. a network call) won't be "tried"; it will just happen.
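If that reading is right, the core trick looks roughly like this (a minimal sketch of the general mechanism, not try's actual code; it needs a kernel new enough for unprivileged overlayfs mounts in user namespaces, ~5.11+):

    $ unshare --mount --map-root-user sh -c '
        mkdir -p /tmp/sb/upper /tmp/sb/work
        # writes to /usr now land in the upper dir; the real /usr is untouched
        mount -t overlay overlay \
          -o lowerdir=/usr,upperdir=/tmp/sb/upper,workdir=/tmp/sb/work /usr
        touch /usr/hello
      '
    $ ls /tmp/sb/upper    # hello shows up here, not in the real /usr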
yes; the mount doesn't need root access because of user namespaces (which is one of the Linux features with the most vulnerabilities, next to BPF, but it's also quite handy...)
the sandbox is just a directory with the overlayfs content (whole new files for modified files, whiteouts for removals); there are some bugs, e.g. removing a directory creates a whiteout that the apply script will try to rm without -r, and there are a handful of other failure modes I can think of (network calls really happen, etc.), but for simple commands it's a nice idea.
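For reference, overlayfs marks a deletion in the upper directory as a character device with device number 0/0, which is why a naive apply step trips over removed directories. Detecting them looks something like this (reusing the example dir from the sketch above):

    $ cd /tmp/sb/upper
    $ find . -type c | while read -r wh; do
        if [ "$(stat -c '%t:%T' "$wh")" = "0:0" ]; then
          echo "deleted: /${wh#./}"   # the apply step must rm -rf this path
        fi
      done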
It seems that the focus is on sandboxing file system access. This is great if you already know that “stateful disk access” is the only thing the command does!
It would be interesting to think about how/whether to extend this to stateful network access. What would happen if I used try on a command that does something like create a GitHub pull request? Right now I believe the pull request gets created regardless of whether I commit the file system changes at the end. Some alternatives:
1. Block all network access, because the tool can’t reliably reason about what could be happening.
2. Block all network access, except accesses that look like HTTP GET requests (and maybe HEAD/OPTIONS requests).
3. Attempt to build some sort of request replay mechanism, so that if a command’s results aren’t committed, but the command is ~immediately re-run with slightly different options, HTTP responses for requests identical to those from the previous attempt are re-used without making the request a second time.
Obviously all three have downsides in terms of how useful/complex/brittle they make the command. But maybe worth pondering at the least.
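For what it's worth, option 1 is cheap to prototype with the same namespace machinery the tool already uses: a fresh network namespace has only a downed loopback interface, so everything networked fails fast. A sketch:

    $ unshare --net --map-root-user curl -s https://api.github.com/
    # fails immediately: the new namespace has no usable interfaces,
    # so name resolution and connections all error out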
Indeed. A cardinal sin in software (in general, not just CLIs) is skipping or half-assing the documentation of side effects. Combined with multi-step operations, partially committed operations can be nasty to debug, or even to reason about.
This tool can bring a level of control that is typically only available in databases and VCSs, and make it a commodity.
Pretty please, put a real description in first. This should be the first thing I read:
"try lets you run a command and inspect its effects before changing your live system."
And put snarky one-liners somewhere later. I only noticed what this project actually is after HN had changed the title of the post, and I had even opened the GitHub page earlier, as I usually do.
There are two very good places: the project description at the top, and the top of the README. I know the temptation to get noticed using a meme, but let me judge the meme after I've decided whether this is something I want to spend my precious time on. Thanks.
He is not running in a container - he is not running the command in isolation from the system, he is running it on the current system. The script mounts an overlay filesystem over each root directory on the current system, then looks at what has been added as a new layer - that's what the command would add to the running system.
Looks nifty— basically it's the "let me try that in a container first" except on your live system with no setup to get it going.
That said, as a NixOS user for the last year or so, I think I've gotten a bit spoiled by just not worrying about this kind of thing any more— eliminating the filesystem (other than home and var) as a big ol' chunk of globally mutable state is so liberating.
I feel that one day I should write about the curse that NixOS brings into your life when you start enjoying it: you cannot go back to other systems, but at the same time you (at least I) cannot vouch for it and recommend it to others, because the languages and constructs are so byzantine and painful to work with (Flakes with a space or a UTF-8 character in the path? There's a rabbit hole you can go down), but oh boy do they work... A crystal prison: nice, but with sharp corners everywhere.
Why don't you recommend it to anyone? Maybe a proper recommendation is what I need to really engage with it? The same goes for Nix. Please give more details about the practical benefits: I can see in general terms why it could be good, but there is still a certain lack of real practical demonstrations from those who use it daily.
I don't recommend emacs because the vast majority of packages have "Lisp Incompletion Syndrome"--they get the easy 80% right and leave you to get bitten by the difficult 20%.
lsp-mode and tramp still have bad Heisenbug interactions even after you get the correct incantations to make them not crash. Other packages are similar.
There are a few very core packages that work well. Everything else is in sufficient disrepair that you will have to pick the broken pieces up off the floor at fairly regular intervals.
Try out Doom! You don't have to use evil-mode either if that's not your thing (I don't use it), just disable :editor evil in your init.el.
Personally I kind of view it like having a custom mechanical keyboard. Why not invest some time and money into making your tools more ergonomic and enjoyable? Yeah any keyboard will work, and any text editor will edit documents.
Text-editing aside, magit and org-mode are particularly nice in Emacs. Plus there's just something comforting knowing that Emacs will always be there for me, just the way I set it up.
> Why not invest some time and money into making your tools more ergonomic and enjoyable?
I did that for many years. After switching from one machine to the next, one operating system to the next, one IDE to the next, everything constantly changing, year after year - I found myself in a job where I had to reinstall the OS and everything on it from scratch, every two weeks, for a year, because... well. Because! By the time that was over, I had given up customizing much of anything at all, and that has been working out all right ever since.
That would not have helped much with the jobs where I needed to use some proprietary IDE, or which involved some OS on which Emacs was poorly supported.
(If I had already been an Emacs fan, I suppose I could have found some way to forcibly bodge things together and use my preferred editor regardless: but I'm afraid it's never appealed to me.)
Tangible (e.g. file- or even better text-file-based) configuration helps here—this is less a fault of customization in general and more of opaque configuration systems.
> Why not invest some time and money into making your tools more ergonomic and enjoyable?
Because unless you use just one system daily (or even weekly), customizations are nothing but an annoyance, since it's unlikely you can clone every customization across every system you use daily.
> since it’s unlikely you can clone every customization across every system you use daily.
But you can, even for physically distinct machines: just package up your emacs/environment/shell/etc. profile onto a Bash Bunny USB stick, such that the Bunny uses its keyboard emulation to type out and run the commands that load your profile onto your current machine.
Right, that's why I said to use a Bash Bunny: it's a USB mass-storage stick that can also emulate a USB keyboard (there are a few buttons on the stick to switch modes). You'd sit at the computer, open a terminal and an editor on a new bash/emacs config file, then plug in the stick in keyboard mode and press its start button; after a few seconds it will have dumped kilobytes of data into the file, which you can save locally. That way you take your bash/shell/emacs/etc. settings with you even without USB mass-storage support (many companies disable USB mass storage to mitigate data exfiltration but, of course, have to allow USB mice and keyboards).
That's why storing your conf in a keyboard input device is so nice: it works as long as you can plug in your own keyboard. I think I might start doing that, but the systems I remote into are so different that I can't be sure emacs/vim is even there.
I would recommend a Raspberry Pico as a fake keyboard, it has 2MB of storage. But that all falls apart when you are not allowed your own USB devices...
A long-term practical benefit: it will always be there for you.
In a world of corporate built software that may or may not exist in a few years, Emacs is an investment for life. It's the last editor, or whatever you use it for, that you'll ever need.
Any open source software project will last as long as the community lasts. If interest fades, then it will become worse/harder to run. It slowly becomes incompatible with newer systems, no one is making plugins, documentation becomes outdated.
True, but open source doesn't guarantee that the community will keep maintaining the project, even if there's interest from users. See Atom, etc.
If the main maintainer is a large company, they can decide to shift focus at any point and abandon the project, which puts its existence in jeopardy. (GNU) Emacs and Vim have been around for decades, and they're pretty much guaranteed to be around for many decades to come. As far as long-term investments go, learning and using these is the safest choice you can make.
Thanks for the link! I definitely agree with the author, NixOS is the only system that does The Right Thing but I cannot recommend it to anyone. I mean, I technically have recommended it to one person, but he's an ex-Arch user so... does it really count? :D
There are definitely still many rough edges and sharp corners!
For software developers and sysadmins with certain temperaments, though, I think it's definitely already a good fit. A lot of NixOS people come from Arch and Gentoo, and it works well for them— although Arch folks who are deeply aligned with its keep-it-simple philosophy are probably usually turned off by Nix.
> eliminating the filesystem (other than home and var) as a big ol' chunk of globally mutable state is so liberating
I'm about a week into the (very painful) process of switching to NixOS.
This is pretty much the promise of NixOS that got me interested, but it seems to be that it's not really true.
NixOS is just running a regular kernel that does regular linux security things. If you want AppArmor or SELinux you still have to configure it yourself.
If you want a sandbox on NixOS, your options are still bubblewrap/firejail, proot, or flatpak. Or of course full virtualization with libvirt.
The NixOS "containers" are just systemd-nspawn which (if I understand correctly) doesn't really offer more security than a properly permissioned user.
I suspect that if you installed a malicious binary in a NixOS package, you'd be just as compromised as you would installing something malicious from AUR.
The Nix store `/nix` is readonly except by the Nix build users. So, if you’re using Nix derivations for everything (the end goal), then rogue processes cannot interfere in any way with files outside of the chroot-like environment the build process creates.
The writable directories (your home dir and var, as the parent stated) are still "vulnerable", and a program can run anything it wants, of course (bound by typical Linux/Unix rules). Nix isn't a containerization/sandboxing technology, but it does remove any fear of installing software that overwrites files you wanted, including OS-level (and kernel) upgrades.
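On a typical NixOS install this is enforced by mounting the store read-only (via a read-only bind mount, I believe), so even root-ish mistakes bounce off:

    $ touch /nix/store/test
    touch: cannot touch '/nix/store/test': Read-only file system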
> but it does remove any fear of installing software overwriting files you wanted, including OS level (and kernel) upgrades.
I understand that's "true" in a theoretical way because the store is read-only and it is all hashed. But the hashes aren't routinely checked by some kind of hypervisor, and root can still overwrite things in the store.
The "fear of installing software overwriting files you wanted" essentially comes down to config file management (unhappy accidents) and malware.
You should have config file management in git already, so I don't feel like NixOS needs to solve that. I was hoping it would solve the problem of random software being able to obtain root and not ransomware me, but it practically doesn't solve that any better than any other distro.
I want to be missing something. I've invested a lot of time learning about Nix for the last week and my system is finally working, but I just got to the sandboxing/security portion of my install and the threat model seems broken.
It seems like you misinterpreted the isolation that's advertised. There are security benefits, but the isolation provided is predominantly about deterministic and reproducible runtime environments.
There's nothing particularly novel happening at the OS level compared to, say, Debian, but the difference is in how you arrived at the current state. You're free to sprinkle whatever other security bits you are fond of.
> deterministic and reproducible runtime environments
But is there a point to having what you believe are deterministic and reproducible runtimes if the environment used to build them doesn't protect against malware in a build from escaping into the build system?
These are good things and possibly make the struggle of NixOS worth it.
But the impression the community gives is very much that you can always rollback and everything is in its own sandbox, which is sort of true, but not at all true as soon as malware happens.
I've never really had a huge problem rolling back an ubuntu or arch update when something breaks, so I'm surprised at the amount of effort people are expending for just this feature with no additional security.
> But the impression the community gives is very much that you can always rollback and everything is in its own sandbox
I've never gotten that impression from the community; since day one my impression has been that it's rollbackable in a revision-control way, not sandbox-like. The dependencies are actually global rather than sandboxed; Nix just makes it explicit which exact instance of which package depends on which exact instance of which. That's not sandboxing at all.
Well, to be honest, it only occurred to me after reading your comments that you could come away with that sandboxing impression, and yeah, I can understand your point.
> But the impression the community gives is very much that you can always rollback and everything is in its own sandbox, which is sort of true, but not at all true as soon as malware happens.
This is more the case with something like Hydra, where you have a remote Nix store and builder. Then even if you compromise a given NixOS instance, the store stays isolated and intact.
So then if you are doing things right, you should be able to optionally back up any mutable data you need and then blow away the entire instance from scratch, creating a new one immediately after.
And bonus points if you can run a UEFI-over-HTTPS image on boot so that your boot image and config are always being delivered (and signed) fresh from a locked down server you control. That way if you want, on boot all nix-store content is validated with `nix store verify --all` before ever being loaded in any trusted context.
If you update ubuntu or arch and something breaks, you have to look at `dpkg.log` or `pacman.log` to see what updated, and then you might need to grab an old package from the archive and manually install it.
This intrinsically doesn't work well, because dependencies are globally namespaced: you can't have every version of a package installed simultaneously, so the old package you grab may not even be installable alongside what's now on disk. In Nix you can, so rollback is guaranteed to work; in Debian it's not remotely guaranteed.
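Concretely, the rollback path on NixOS is a first-class command rather than log archaeology; something like:

    $ sudo nixos-rebuild switch --rollback      # flip back to the previous system generation
    $ nix-env --list-generations \
        --profile /nix/var/nix/profiles/system  # see which generations are still around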
> But the impression the community gives is very much that you can always rollback and everything is in its own sandbox, which is sort of true, but not at all true as soon as malware happens.
I can see why, if you do malware research or something like that, you might hear about the rollback capability and get your hopes up in a certain way, but that's not really the use case.
NixOS is nice for running untrusted/suspect software in a few ways I can think of, though. (They don't exactly make NixOS a security tool, but I think maybe you could leverage them to build one.)
1. If a NixOS system is compromised, blowing it away and installing from clean media is cheap compared to on other operating systems, since you can retain more of your special configuration. Reinstalling is a bit more like a snapshot restore, at least in terms of the systemwide setup (users, running services and their configurations, and installed packages).
2. NixOS does make it really easy to spin up a clone of your running configuration in a VM instead of directly switching to that config on the local system (see the sketch after this list).
3. The Nix sandbox is a pretty nice place to perform builds from big repos where packages can run arbitrary hooks at build time, like PyPI and NPM, since you can have it build in chroots with temporary build users, no network access, and maybe some other nice things.
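For point 2, that VM spin-up is a single built-in command; roughly:

    $ nixos-rebuild build-vm     # builds ./result/bin/run-<hostname>-vm
    $ ./result/bin/run-*-vm      # boots a throwaway QEMU VM with your exact config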
There is also actually at least one Nix-based OS trying to achieve new things in security research out there, Spectrum: https://spectrum-os.org/
> I'm surprised at the amount of effort people are expending for just this feature with no additional security.
NixOS (and Nix more generally) doesn't have a singular feature like that driving its usage or development forward, even though rollbacks are a really nice one that will often come to mind when you ask NixOS users what they like about running NixOS.
But if I had to name contenders for the top one or two 'biggest differentiators' from other tools/paradigms that let you achieve similar effects, like other configuration management systems or copy-on-write filesystem snapshotting, I'd say it's the totalizing way that NixOS integrates those features.
Because NixOS generates the configuration of the whole system, it gets to avoid having to inspect most of the system's state, and generally handles the bits of transitioning between configurations that do require inspecting and reasoning about the state of the system quickly and pretty well. There's just a smaller surface area there.
Similarly, you're just more likely to be able to easily roll back with NixOS, because those features are built into all of the normal system/package management operations, and leveraging them is generally the path of least resistance to changing the system. You end up being able to count on them more 'by default': you're much less likely to make an important change and have a gap without a snapshot. The garbage collection system makes clearing out unused data easier (imo). The general reproducibility also gives you multiple layers of intervention for rolling back: even if you do garbage-collect all your past generations, your version control system becomes another reliable way of 'rolling back'.

Both of those ways of going back and forth through iterations of your configuration can be further combined with similar interventions at other layers, like dotfile management via Home Manager, snapshotting filesystems for unmanaged files, selective persistence via something like Impermanence, etc. These things can add up to a system where the kind of ad-hoc changes that might leak through your state management tools (snapper, etckeeper, dotfile management, etc.) become a radical departure from the way you regularly work.
Another differentiator here is maybe the generality: when you Nixify, you sometimes have to do a lot of work up front just to get things working on any deployment target, but the marginal work to go from a NixOS setup to some other kind— generating identical container images, preparing a VM for local use, running your config on macOS, partially sharing your desktop configuration with a server, letting a friend or colleague experiment with or debug your exact setup, preparing an AMI, etc.— is lower, and decreases with each further investment you make in the Nix universe. Different aspects of that inevitably end up being valuable, impressive, or delightful to different users. Taken alone, none of them might seem overwhelmingly compelling compared to alternative approaches.
I can recommend reading the systemd manual entries (e.g. man systemd.exec).
systemd, meanwhile, has a lot of options for managing a seccomp-based sandbox, e.g. the various Protect* options for the filesystem, mounting critical things read-only, simulating a chroot with its own fake root user, etc.
You can also manage the capabilities of a binary from there, so it's actually integrated down the kernel stack.
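Those same systemd.exec knobs are easy to experiment with ad hoc via systemd-run before committing them to a unit file; a sketch:

    $ sudo systemd-run --pty \
        -p ProtectSystem=strict -p ProtectHome=yes \
        -p PrivateTmp=yes -p PrivateDevices=yes \
        -p NoNewPrivileges=yes \
        -p SystemCallFilter=@system-service \
        /bin/sh     # poke around: / is read-only, /home is inaccessible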
However, as you mentioned, the lack of an official "profile database" for common packages/software makes it just as useless as the other tools.
I wish we had a repo where all the things come together and people can just do something like "autosandbox apache2" and it will do the rest.
Thanks. I'm learning about this today, and I'm beginning to suspect that all the extra isolation software is not really useful if you configure AppArmor and systemd properly per service.
The space between "full virtual machine" and "unix permission model" is vast and confusing.
I would have thought that because everything is hashed on nix, it would be trivial to spin up full "virtual machines" without consuming mountains of disk space, but that does not seem to be an option.
Sorry… I see no other way to contact you. I saw here in one of your previous comments that you were able to put 32gb of memory on a T440p… can you tell how!? If possible please dm me. Thanks.
> Looks nifty— basically it's the "let me try that in a container first" except on your live system with no setup to get it going.
Also, if I understand it correctly, it saves doing a potentially expensive operation twice: `try`ing actually performs the operation, ready to be committed; whereas if you do something in a container and it works, you still have to do it again "normally".
It would be nifty to save out "try"s in a sqlite/whatever and then curl install tries on other systems - such that you can easily clone certain setups between machines on a small/home network.
Also, it would be nice if you could name tries as install-stacks, such that you can do: try --name 'homeWebServer' [then do your tries here].
Then go to another machine and, from your try repo, just type: try install --name homeWebServer, and it does whatever your try stack was.
From my cursory understanding (based solely on the README), it seems that `try`s are just directories, so that they can automatically be slung around, without any need for a backing database:
> Sometimes, you might want to pre-execute a command and commit its result at a later time. Invoking try with the -n flag will return the overlay directory, without committing the result.
Also:
> curl install tries
… my brain instantly translated that to "curly fries". Built-in auto-correct!
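Joking aside, based on the README behavior quoted above, slinging a try between machines might look roughly like this (directory name made up, flow untested; `try commit` is the subcommand the README documents for applying a pending overlay, if I'm reading it right):

    $ try -n pip3 install libdash
    /tmp/tmp.AbCdEf                  # the overlay directory holding the pending changes
    $ try commit /tmp/tmp.AbCdEf     # apply them later, if and when you want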
I am intrigued by the idea of experimenting with NixOS. For now, my daily driver on workstations is Arch; on servers, RH-flavored or Debian.
Everything is managed by Ansible, plus persistent data is backed up. A lost server could be reinstated quickly; same for setting up a new workstation.
I understand NixOS can do this too. I have also read about Home Manager for NixOS, which I think is essential as well.
Otoh, I'm not a developer. I don't think I need different versions of software. Maybe this is why I have not switched to NixOS so far.
As cool as this is, it shouldn’t be necessary. A proper undo turns every command into the equivalent of a “try”, allowing users to experiment without fear of data loss. Everything in a computer user interface should be undoable.
This has been known for over 40 years, but the industry has been very slow to get the memo. The undo implementation on the iPhone is a weird joke. CLIs have barely even tried (with a few exceptions like Jef Raskin’s Canon Cat, a textual UI completely different from anything else I’ve ever seen).
To quote Tom Lehrer, "I am never forget the day..." [1]
I was working with WinXP, trying to reconfigure a machine's network for something abstruse. I was making changes in the network configuration dialog, and kept tentatively moving forward because there was always the "CANCEL" button available to revert my changes.
I made one last change, and suddenly the "CANCEL" button dimmed. It was as if you were creeping into the entranceway of a haunted castle, and after one more step the door slams shut behind you.
DO NOT RUN THAT, or you risk breaking the Computer Fraud and Abuse Act! It sends a 'reverse ping' from Google to you, and it effectively hacks their server to initiate it.
Hah, clearly you have no idea of what you are talking! In order to ascertain the general level of humor of a programmer, you would have to assume all programmers are equivalent! Furthermore you would be assuming that there is one kind of humor, which is clearly a fallacy. Finally, the amount of "terribleness" you allude to cannot be quantified in such a way as to distinguish an objective level of quality. Therefore you are clearly wrong! Try doing some research before you share such a naive opinion in the future.
Agree in principle, but I'm not sure if it's possible to implement Undo for all shell operations. But there's a lot of existing systems out there, and anything which can be adopted incrementally is a big win.
Another shell variation I like is using trash rather than rm.
I agree that there should be universal undo, but I think by contrast the (informal) transactionality of the "try" model can be a big deal if your system is concurrent.
If you do a thing, find out it isn't what you wanted, and then undo it, while at the same some other process is observing the mutated state, that's potentially a much trickier mental model. If you undo a configuration change and in the meantime a background process has acted on the new configuration, how do you roll that back? Rewinding the timeline is one thing, but maybe throwing out all the work that happened with the wrong configuration is even worse than the status quo.
Off the top of my head, I'm thinking about accidentally changing retention windows or bash history max size, where the data loss is super indirect and you'd have to hunt down the undo button for a completely different process; or changing a log format so you end up with a file that's mixed JSON and plain-text logs.
(Of course, "try" presumably applying the changes in a completely synthetic way after the fact could be its entire own can of worms in a very dynamic system if there's a risk changes are applied in the wrong order or skipping some atomicity dance.)
Fully agreed on this. The simple, default way of doing anything should be able to be undone. Hell, years ago Google even figured out how to give people an undo button on email! Yes, it's just a simple time delay but it makes such a huge difference because of how your state of mind changes between typing an email and hitting send. Or hell, maybe you just accidentally hit the send button.
Undo allows you to make the default behavior for every operation to be to just go and do it (or queue it up to be done). No need to have a confirmation that the user is going to quickly become conditioned to pressing yes on while also being just an annoyance 99% of the time.
We should also build a space elevator and figure out nuclear fusion. Unfortunately, many good ideas are easy to describe but difficult/impossible to do.
Sorry, I don't want a transactional layer to accompany every interaction I have with my Linux system (or most other systems, for that matter).
I want to choose when I need undo/redo and when I don't; a perpetually present transactional layer is just cruft for most of the time.
Furthermore, it opens up basic system interaction to the same fundamental questions that in-app undo systems have: do you branch? how deep is the history? how persistent is the history? etc.
a journaling file system is not the same as creating rollback points for every shell command prompt + enter.
also, if you're going to be strict about it, a true undo would need to handle modifications to anything in the /proc or /sys or /dev "filesystems", which are not covered by journaling-anything.
You can implement such a system-wide undo with filesystem snapshots using LVM2 or btrfs (or with backup/restore).
However, you also need to properly isolate software in containers or VMs since of course doing a system-wide undo on a system that is running a server will also revert the server state, which is usually disastrous.
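A sketch of the LVM2 variant (volume group and LV names are made up):

    $ sudo lvcreate -s -n undo -L 5G vg0/root   # checkpoint the root LV
    ... do the risky thing ...
    $ sudo lvconvert --merge vg0/undo   # undo: merges back once the origin is next reactivated
    $ sudo lvremove vg0/undo            # or keep the changes and just drop the checkpoint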
I wonder if there's a world where we run nearly all user programs in isolated (containerized) environments, with minimal access to persistent storage, etc. In many ways it seems weird that we let every program access everything that your user has access to, by default.
That's awesome, it feels like an area worth exploring within the shell. If I had a magic wand and I could create an ideal shell experience, I'd love it if you could soft-execute a script in a notebook style environment where each command can be inspected and tweaked before committing the change. It feels like shells are still incredibly constrained and that they haven't evolved that much since the 90s.
The defaults are kind of crazy as well. Every OS should ship with a `trash` binary that puts a file in the Trash without actually deleting it, rather than recommending `rm`. I get some people are perfectly happy to play without guard rails, but I'm sure some of us would like a few more guard rails which we can tweak.
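On Linux that binary mostly exists already: trash-cli implements the freedesktop.org Trash spec. For example:

    $ trash-put big-report.pdf   # into the Trash, not gone
    $ trash-list                 # see what's in there
    $ trash-restore              # interactively put something back
    $ trash-empty 30             # purge only items older than 30 days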
I think another similar innovation is how with NixOS you're supposed to be able to diff system changes between upgrades with ease. Which makes sense, since your OS config is usually based on the vendor default with a bunch of changes applied on top.
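(That diff is built in these days; assuming a recent Nix, something like:

    $ nix store diff-closures /run/current-system ./result

prints the per-package version and closure-size changes between the running system and a freshly built one.)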
Interesting. How does it handle the case where a modified file receives another change between the run and the confirmation? Understanding this case is really my only reservation about trying this out.
Reminds me of when I was using bubblewrap (the tool Flatpak uses for sandboxing) to run programs from my main system on a tmpfs, to avoid changes to my main system.
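Something roughly like this, presumably (a sketch; the bind choices will vary by use case):

    # everything read-only, with a throwaway tmpfs over $HOME and /tmp
    $ bwrap --ro-bind / / --tmpfs "$HOME" --tmpfs /tmp \
        --dev /dev --proc /proc \
        some-command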
It's certainly a neat idea, I like where it's coming from. I think it might be better if we just had an easier way to checkpoint and restore systems, so you don't have to "try" anything, you just "do", and ideally we have a very reliable way to just... Go back 2 minutes in time if something goes wrong. Something that isn't painful or slow and does work intuitively.
Filesystem checkpointing/snapshotting can take care of most of the ability to restore a system after a big problem. The next best thing is something like process checkpointing, and after that would be checkpointing a whole system's processes, which would be much more complicated and perhaps not worth it.
Filesystem checkpointing/snapshotting is one of the nice things about ZFS.
If the OS supports ZFS on boot drives, you can do checkpointed full system/kernel upgrades.
I like your idea about process checkpointing. Of course any side effects (ultimately we're only talking about network in/out) would be irreversible, but they would in theory be replayable (you'd have to checkpoint the RTC too). The other side of the connection might have other ideas, though...
Honestly, this is only necessary when there is no virtualization, the system image is mutable, and not managed by configuration management.
With virtualization, snapshot and restore functionality makes this completely moot because it occurs outside of OS and captures the entire system state.
If the system image is mutable and not using configuration management, then system entropy is a real problem. Better have backups before "trying" anything you can't undo or that has side effects. You generally should have backups, configuration management, and monitoring, and should minimize attended commands (and log the ones you do issue). Attended fiddling is the path to entropy and problems.
Neat! I’m definitely looking for something like this. My use case is running semi-trusted dev tools that come packaged. I need to run them, but I don’t want to trust them, but it’s too much of a faff to actually check, or run them in isolation (think dotnet tooling).
What I would love to see is blocking of I/O, both reads/writes and network requests. That would let me see whether some script is attempting to exfiltrate my home dir / ssh / gpg keys.
Not looking to fully secure, just for some more intuition about commands/scripts and their dependencies
The title reminds me of my time in a call center. At that time I was trying to earn an income working as an inbound and outbound agent for a large german telecommunications network operator. When I was fired, my team leader gave me the advice: "There's no trying, just do or don't do." (in German of course)
I think I had a moment of post-traumatic stress disorder while reading the title.
But binpash seems to be a very useful and nice piece of software that I will gladly give a try. :)
“If you're going to try, go all the way. Otherwise, don't even start. This could mean losing girlfriends, wives, relatives and maybe even your mind. It could mean not eating for three or four days. It could mean freezing on a park bench. It could mean jail. It could mean derision. It could mean mockery--isolation. Isolation is the gift. All the others are a test of your endurance, of how much you really want to do it. And, you'll do it, despite rejection and the worst odds. And it will be better than anything else you can imagine. If you're going to try, go all the way. There is no other feeling like that. You will be alone with the gods, and the nights will flame with fire. You will ride life straight to perfect laughter. It's the only good fight there is.”
Sunk cost fallacy. And the cause of the continuation of so many horrible wars. Take Quark's advice on the third rule of acquisition: https://youtu.be/hdQcGzbpN7s
> Never spend more for an acquisition than you have to.
$ try pip3 install libdash
Warning: Failed mounting /boot as an overlay, see /tmp/tmp.BrLiRj0Brb
Warning: Failed mounting /home as an overlay, see /tmp/tmp.BrLiRj0Brb
Warning: Failed mounting /snap as an overlay, see /tmp/tmp.BrLiRj0Brb
/tmp/tmp.c7hp4nI6lE: line 4: cd: /home/user: No such file or directory
I think there are some interesting use cases where I could see myself using this every day.
Say you're a Django developer using the dev server and a SQLite db. Every time you restart the dev server, you can easily reset to the previous state: SQLite db reverted, any other media that was uploaded or modified changed back. All with no setup, no containers, just prepend "try".
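That is, something like this (untested, but it's the workflow the README describes):

    $ try python3 manage.py runserver
    ... click around, upload files, mutate db.sqlite3 ...
    # Ctrl-C, then decline the commit prompt: the db and media
    # files are back exactly as they were before the run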
I use a similar approach to have multiple isolated VS Code instances each in their own Nix shell, for easier development. It's surprisingly effective, and performance is just fine. I'm kind of curious how this interacts with Chromium's use of SQLite, but I haven't noticed any particular problems.
This is really nice! What a clever, but oh so obvious, idea. Love it.
Anyone know of an equivalent for MacOS?
Obviously Macs are missing some of the features this uses, but I wonder if there are any alternatives that could enable this sort of command on a Mac. I assume the lack of native Docker or equivalent is probably indicative of a no.
This requires Linux-specific features. The only way you are getting this on macOS is the same way you get "Docker on macOS": by running a virtualised Linux machine.
I haven't run this command, but (editing, as this was an incomplete thought accidentally posted) minikube and brew's version of Docker work for me as a complete Docker environment on my (Intel) Mac.
https://sandboxie-plus.com/ This exists for Windows, but apps can detect and change their behavior when you are hooking all of the underlying calls for filesystem/registry.
Super simple with ZFS: take a snapshot, run your command. You won't see what it did to anything non-ZFS (temp filesystems like /var/run and so on), though.
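For reference, the flow looks roughly like this (pool/dataset names made up):

    $ sudo zfs snapshot -r tank@pre-try     # recursive checkpoint
    $ some-risky-command
    $ zfs diff tank/root@pre-try            # what changed since the snapshot
    $ sudo zfs rollback tank/root@pre-try   # revert that dataset
    $ sudo zfs destroy -r tank@pre-try      # or keep the changes and drop the checkpoint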
I wish this were possible on macOS. You can see all file activity with dtrace but that's not nearly as easy as snapshotting.
This would be good with batch HPC systems, possibly both for checkpointing and for idempotency. I tried to do something similar with FUSE overlays, but had so many issues (2020) with this and rootless containers at the time.
This is awesome! I wonder if, using the same technique, you could even create a pseudo package manager: once a pkg is installed via 'try', it keeps track of its files and can remove or update such a pkg.
Neat trick. What about commands that impact the running state (or other external systems) and not just the file system? Would that also be rolled back? Take effect? (I'm not familiar with namespaces)
The example does `try pip install ...`. So it does execute the network requests, I think it just impacts the file system. I can't imagine a way for the tool to know what the network request would answer without actually running it, right?
Oh, I actually implemented something similar a few days ago. I was writing an rpm build script to make a repository on S3. The repo is mounted with rclone, and I'm using overlayfs to make a staging area where I can add new rpms as they are built and update the repo metadata, without risking an inconsistent state if something fails partway through the build (e.g. the build succeeds for x86 but fails for aarch64). Then, once the whole build process has succeeded, I use rclone sync to commit the changes back to S3.
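The shape of it, roughly (remote and paths made up):

    $ rclone mount s3:my-rpm-repo /mnt/repo --vfs-cache-mode writes &
    $ mkdir -p /tmp/stage/upper /tmp/stage/work /mnt/staging
    $ sudo mount -t overlay overlay \
        -o lowerdir=/mnt/repo,upperdir=/tmp/stage/upper,workdir=/tmp/stage/work \
        /mnt/staging
    ... build rpms into /mnt/staging, regenerate metadata with createrepo_c ...
    $ rclone sync /mnt/staging s3:my-rpm-repo   # commit, only once everything succeeded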
It would be fabulous to have something equivalent within SQL, instead of having to run a SELECT first and then hacking that into an UPDATE. Every time.
"transactions" do give that feature, but related to the nix/emacs discussion above, my gripe is that every rdbms seems to do it differently.
That said I forgot how overloaded the term "transactions" is and most kinds of "transactions" don't allow undo or preview modes. Like, there's the reserved word "transaction", and then there's the "transaction" that can literally describe any db/network request, or an exchange of money for goods and services.
When I do what the parent comment says, it's basically because the select statement is a "test"/preview
In some dbs you can do rollbacks or need to commit your db changes to affect the global state, but I haven't seen that universally + consistently implemented.
Just mind the isolation level of concurrent queries which may be running. By default you are probably fine since it's usually snapshot isolation mode (will only see committed results as of query start) but there are other modes that break this.
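The portable version of that preview pattern, for what it's worth (Postgres-flavored; table and column names made up):

    $ psql mydb <<'SQL'
    BEGIN;
    UPDATE users SET plan = 'free' WHERE last_login < now() - interval '1 year';
    SELECT plan, count(*) FROM users GROUP BY plan;  -- inspect the effect first
    ROLLBACK;                                        -- re-run with COMMIT once happy
    SQL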
As an aside, I HATE the saying "Do, or do not. There is no try." Perhaps the best response I ever heard to this drivel was in "The Dropout", the miniseries where piece-of-human-garbage Elizabeth Holmes is played by Amanda Seyfried.
Professor Dr. Phyllis Gardner, played by Laurie Metcalf, responds, "That's all science is: trying."
Parallel Yoda quotes: https://books.google.com/books?id=pAjYCgAAQBAJ&pg=PT47#v=one... via: https://www.slate.com/articles/arts/cover_story/2015/12/star...
Thread with 1975 magazine explanation of Castaneda's advisor found partly via David Chapman: https://twitter.com/thadk/status/1670316860368199681?s=20