Try: run a command and inspect its effects before changing your live system (github.com/binpash)
1096 points by espressoRunner on June 24, 2023 | 182 comments


As it happens, Luke-and-Yoda phraseology like "Do or do not. There is no try" seems to have been drawn by George Lucas from Carlos Castaneda's popular late-1960s books about a mentorship (~7 million copies sold). That mentorship in turn seems to be modeled on the apprenticeship Castaneda had with his UCLA PhD advisor, Harold Garfinkel:

Parallel Yoda quotes: https://books.google.com/books?id=pAjYCgAAQBAJ&pg=PT47#v=one... via: https://www.slate.com/articles/arts/cover_story/2015/12/star...

Thread with 1975 magazine explanation of Castaneda's advisor found partly via David Chapman: https://twitter.com/thadk/status/1670316860368199681?s=20


This looks cool, and the script actually seems pretty small, but I am not entirely sure I understand what it does.

My naive understanding skimming through the code:

1- It mounts the entire (?) filesystem in what I will call "overlayfs mount points" (doesn't it need root access for that?)

2- It executes the command on that "sandbox", which looks exactly like the rootfs, but is actually an overlay

3- Once the command returns, it knows what has been changed in the overlay, and shows it to the user. The user can then "commit" those changes (in which case I assume the command writes them from the overlayfs to the actual rootfs)

Everything that does not happen on the filesystem (e.g. a network call) won't be "tried", it will just happen.

Is that about right?


Yes; the mount doesn't need root access because of user namespaces (which, next to BPF, is one of the Linux features with the most vulnerabilities, but it's also quite handy...).

The sandbox is just a directory with the overlayfs content (whole new files for modified files, whiteouts for removals). There are some bugs, e.g. removing a directory will create a whiteout that the apply script will try to rm without -r, and there are a handful of other failure modes I can think of (really doing network access, etc.), but for simple commands it's a nice idea.
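
For anyone curious, the core trick looks roughly like this (a simplified sketch, not the actual script; unprivileged overlay mounts inside a user namespace need Linux 5.11+):

    SANDBOX=$(mktemp -d)
    mkdir -p "$SANDBOX/upper" "$SANDBOX/work" "$SANDBOX/merged"
    unshare --map-root-user --mount sh -c "
      mount -t overlay overlay \
        -o lowerdir=/etc,upperdir=$SANDBOX/upper,workdir=$SANDBOX/work \
        $SANDBOX/merged
      touch $SANDBOX/merged/demo   # 'modifies' /etc in the merged view only
    "
    find "$SANDBOX/upper"          # new files, modified files, and whiteouts land here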


What about the checkinstall command? How does that work?


It seems that the focus is on sandboxing file system access. This is great if you already know that “stateful disk access” is the only thing the command does!

It would be interesting to think about how/whether to extend this to stateful network access. What would happen if I used try on a command that does something like create a GitHub pull request? Right now I believe the pull request gets created regardless of whether I commit the file system changes at the end. Some alternatives:

1. Block all network access, because the tool can’t reliably reason about what could be happening.

2. Block all network access, except accesses that look like HTTP GET requests (and maybe HEAD/OPTIONS requests).

3. Attempt to build some sort of request replay mechanism, so that if a command’s results aren’t committed, but the command is ~immediately re-run with slightly different options, HTTP responses for requests identical to those from the previous attempt are re-used without making the request a second time.

Obviously all three have downsides in terms of how useful/complex/brittle they make the command. But maybe worth pondering at the least.
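
For what it's worth, option 1 is easy to approximate today with another namespace trick; the fresh network namespace contains only a down loopback interface, so every connection fails fast:

    unshare --map-root-user --net -- pip install requests   # no network inside the empty netns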


4. Attempt to intercept network requests, so you can prompt the user to either continue running the command or abort the connection.


If there's a mode to prompt on all system / file system access it would be a great tool for understanding what a command does to the system.


This is super cool, and I immediately thought of many use cases. For example:

* quickly find out which files are touched / installed by apt installing a certain package

* find out which log file, if any, a program writes to

* run one of those curl | bash installation commands and inspect exactly what it'll do without reading through a huge script
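
For that last one, it could be as simple as something like this (a sketch; the README demos the same inspect-then-commit flow with pip):

    try sh -c 'curl -fsSL https://example.com/install.sh | bash'
    # inspect try's summary of what the script would change, then commit or walk away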


Indeed. A cardinal sin in software (in general, not just in CLIs) is skipping or half-assing documentation of side effects. Combined with multi-step operations, partially committed operations can be nasty to debug, and even to reason about.

This tool can bring a level of control that is typically only available in databases and VCSs, and make it a commodity.


* see how often a process writes to disk, preventing spindown


Pretty please, put a real description in first. This should be the first thing I read:

"try lets you run a command and inspect its effects before changing your live system."

And put snarky one-liners somewhere later. I only noticed what this project actually is after HN changed the title of the post, and I had even opened the GitHub page earlier, as I usually do.

There are two very good places for it: the project description at the top, and the top of the README. I know the temptation to get noticed using a meme, but let me judge the meme after I've decided whether this is something I want to spend my precious time on. Thanks.


It's not running in a container; the command isn't isolated from the system, it's running on the current system. The script mounts an overlay file system over each root directory on the current system, then looks at what has been added as a new layer; that's what the command would add to the running system.

Pure genius.

See https://github.com/binpash/try/blob/b2df6b650cb2b58951563174...


Can you do this as a non-root user?


I don't know about the script, but in theory yes, as long as unprivileged user namespaces are enabled. (They are by default on most distros these days.)
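
A quick way to check (the first knob is upstream; the second is a Debian/Ubuntu-style gate that not all kernels have):

    sysctl user.max_user_namespaces          # nonzero: user namespaces available
    sysctl kernel.unprivileged_userns_clone  # where present: 1 = unprivileged use allowed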


Didn't try it out yet (running on macOS right now), but it should work for non-root users; the underlying file system is mounted read-only.


Looks nifty— basically it's the "let me try that in a container first" except on your live system with no setup to get it going.

That said, as a NixOS user for the last year or so, I think I've gotten a bit spoiled by just not worrying about this kind of thing any more— eliminating the filesystem (other than home and var) as a big ol' chunk of globally mutable state is so liberating.


I feel that one day I should write about this curse that NixOS brings into your life once you start enjoying it: you cannot go back to different systems, but at the same time you (at least I) cannot vouch for it and recommend it to others, because the languages and constructs are just so byzantine and painful to work with (Flakes with a space or a UTF-8 character in the path? There's a rabbit hole you can go down). But oh boy, do they work... A crystal prison: nice, but with sharp corners everywhere...


It's the same as Emacs for me. I don't want to leave it but I don't recommend it to anyone.


Why don't you recommend it to anyone? Maybe a proper recommendation is what I need to really get engaged with it? The same goes for Nix.

Please share more details about the practical benefits. I see in general why it can be good, but there's still a certain lack of real practical demonstrations from those who use it daily.


> Why don't you recommend it to anyone?

I don't recommend emacs because the vast majority of packages have "Lisp Incompletion Syndrome"--they get the easy 80% right and leave you to get bitten by the difficult 20%.

lsp-mode and tramp still have bad Heisenbug interactions even after you get the correct incantations to make them not crash. Other packages are similar.

There are a few very core packages that work well. Everything else is in sufficient disrepair that you will have to pick the broken pieces up off the floor at fairly regular intervals.


Try out Doom! You don't have to use evil-mode either if that's not your thing (I don't use it), just disable :editor evil in your init.el.

Personally I kind of view it like having a custom mechanical keyboard. Why not invest some time and money into making your tools more ergonomic and enjoyable? Yeah any keyboard will work, and any text editor will edit documents.

Text-editing aside, magit and org-mode are particularly nice in Emacs. Plus there's just something comforting knowing that Emacs will always be there for me, just the way I set it up.


> Why not invest some time and money into making your tools more ergonomic and enjoyable?

I did that for many years. After switching from one machine to the next, one operating system to the next, one IDE to the next, everything constantly changing, year after year - I found myself in a job where I had to reinstall the OS and everything on it from scratch, every two weeks, for a year, because... well. Because! By the time that was over, I had given up customizing much of anything at all, and that has been working out all right ever since.


Why not keep your config externally available and reuse when setting up again? With Emacs that is easily possible.


That would not have helped much with the jobs where I needed to use some proprietary IDE, or which involved some OS on which Emacs was poorly supported.

(If I had already been an Emacs fan, I suppose I could have found some way to forcibly bodge things together and use my preferred editor regardless: but I'm afraid it's never appealed to me.)


Tangible (e.g. file- or even better text-file-based) configuration helps here—this is less a fault of customization in general and more of opaque configuration systems.


Reinstall every other week. Haven't done that since Windows 98.


> Why not invest some time and money into making your tools more ergonomic and enjoyable?

Because unless you use just one system daily or even weekly, customizations are nothing but an annoyance, since it's unlikely you can clone every customization across every system you use daily.


> since it’s unlikely you can clone every customization across every system you use daily.

But you can, even for physically distinct machines: just package up your emacs/environment/shell/etc. profile onto a Bash Bunny USB stick, such that the bunny uses its keyboard emulation to type out and run the commands that load your profile onto your current machine.


Unlike most things in the comments above, I have no clue what you are talking about and highly doubt many people do.


Not everyone works on the machine they are sitting in front of.


Right, that's why I said to use a Bash Bunny: it's a USB mass-storage stick that can also emulate a USB keyboard (there are a few buttons on the stick to switch modes). You'd sit at the computer, open a terminal and an editor on a new bash/emacs config file, then plug in the stick in keyboard mode and press its start button; after a few seconds it will have typed kilobytes of data into the file, which you can save locally. That way you take your bash/shell/emacs/etc. settings with you without needing USB mass-storage support (many companies disable USB mass storage to mitigate data exfiltration but, of course, need to allow USB mice and keyboards).


That's why storing your conf in a keyboard input device is so nice: it works as long as you can plug in your own keyboard. I think I might start doing that, but the systems I remote into are so different that I can't be sure emacs/vim is there.

I would recommend a Raspberry Pi Pico as a fake keyboard; it has 2MB of storage. But that all falls apart when you are not allowed your own USB devices...


bring your $HOME/.emacs.d around with you, pretty easy :)


A long-term practical benefit: it will always be there for you.

In a world of corporate built software that may or may not exist in a few years, Emacs is an investment for life. It's the last editor, or whatever you use it for, that you'll ever need.


Any open source software project will last as long as the community lasts. If interest fades, then it will become worse/harder to run. It slowly becomes incompatible with newer systems, no one is making plugins, documentation becomes outdated.


True, but open source doesn't guarantee that the community will keep maintaining the project, even if there's interest from users. See Atom, etc.

If the main maintainer is a large company, they can decide to shift focus at any point and abandon the project, which puts its existence in jeopardy. (GNU) Emacs and Vim have been around for decades, and they're pretty much guaranteed to be around for many decades to come. As far as long-term investments go, learning and using these is the safest choice you can make.


Are you kidding me? Who wouldn’t want to emulate an OS or play games directly from their text editor?


I'm loving Dunnet in Emacs. Nobody spoil me on how it ends, plz.


this is exactly the software that came to mind for me.


You mean like this? https://blog.wesleyac.com/posts/the-curse-of-nixos

You should still write your commentary on the idea, though!


Thanks for the link! I definitely agree with the author, NixOS is the only system that does The Right Thing but I cannot recommend it to anyone. I mean, I technically have recommended it to one person, but he's an ex-Arch user so... does it really count? :D


There are definitely still many rough edges and sharp corners!

For software developers and sysadmins with certain temperaments, though, I think it's definitely already a good fit. A lot of NixOS people come from Arch and Gentoo, and it works well for them— although Arch folks who are deeply aligned with its keep-it-simple philosophy are probably turned off by Nix.


Heh, I love a good dig at GCL.


Funny you mention spaces in flake paths. I was hit by that bug and bumped the PR to address it, and it might be getting merged in soon!


That's the problem... so much time for a pleasing but inefficient payoff.


Or even worse: when you cannot switch away from it, but you're not smart enough to package proprietary software that's not available in the repositories.


there's always not using flakes


> eliminating the filesystem (other than home and var) as a big ol' chunk of globally mutable state is so liberating

I'm about a week into the (very painful) process of switching to NixOS.

This is pretty much the promise of NixOS that got me interested, but it seems that it's not really true.

NixOS is just running a regular kernel that does regular linux security things. If you want AppArmor or SELinux you still have to configure it yourself.

If you want a sandbox on NixOS, your options are still bubblewrap/firejail, proot, or flatpak. Or of course full virtualization with libvirt.

The NixOS "containers" are just systemd-nspawn which (if I understand correctly) doesn't really offer more security than a properly permissioned user.

I suspect that if you installed a malicious binary in a NixOS package, you'd be just as compromised as you would installing something malicious from AUR.


The Nix store `/nix` is readonly except by the Nix build users. So, if you’re using Nix derivations for everything (the end goal), then rogue processes cannot interfere in any way with files outside of the chroot-like environment the build process creates.

The writable directories (your home dir and var, as the parent stated) are still “vulnerable” and a program can run anything they want of course (bound by typical Linux/Unix rules). Nix isn’t a containerization/sandboxing technology, but it does remove any fear of installing software overwriting files you wanted, including OS level (and kernel) upgrades.


>...but it does remove any fear of installing software overwriting files you wanted, including OS level (and kernel) upgrades.

I guess it is nice to know that my Python installation cannot be deleted by malware, but I am more concerned about a process stealing my .ssh keys.


Honestly, exactly recreating my byzantine python installation(s) would probably be impossible [0]

(In all seriousness, after switching to Arch I've actually solved this problem by simply installing python packages I need from the AUR)

[0]: https://xkcd.com/1987/


> but it does remove any fear of installing software overwriting files you wanted, including OS level (and kernel) upgrades.

I understand that's "true" in a theoretical way because the store is read-only and it is all hashed. But the hashes aren't routinely checked by some kind of hypervisor, and root can still overwrite things in the store.

The "fear of installing software overwriting files you wanted" essentially comes down to config file management (unhappy accidents) and malware.

You should have config file management in git already, so I don't feel like NixOS needs to solve that. I was hoping it would solve the problem of random software being able to obtain root and not ransomware me, but it practically doesn't solve that any better than any other distro.

I want to be missing something. I've invested a lot of time learning about Nix for the last week and my system is finally working, but I just got to the sandboxing/security portion of my install and the threat model seems broken.


It seems like you misinterpreted the isolation that's advertised. There are security benefits, but the isolation provided is predominantly about deterministic and reproducible runtime environments.

There's nothing particularly novel happening at the OS level compared to, say, Debian, but the difference is in how you arrived at the current state. You're free to sprinkle whatever other security bits you are fond of.


> deterministic and reproducible runtime environments

But is there a point to having what you believe are deterministic and reproducible runtimes if the environment used to build them doesn't protect against malware in a build from escaping into the build system?


There are multiple non security reasons to want deterministic and reproducible runtime environments.

1. Everything you need to describe and build a project is obtained by checking it out.

2. No more "works on my machine but not yours" problems.

3. No more weird library dependency conflicts.

I've run into all of these and I find them all frustrating when it happens.


These are good things and possibly make the struggle of NixOS worth it.

But the impression the community gives is very much that you can always rollback and everything is in its own sandbox, which is sort of true, but not at all true as soon as malware happens.

I've never really had a huge problem rolling back an ubuntu or arch update when something breaks, so I'm surprised at the amount of effort people are expending for just this feature with no additional security.


> But the impression the community gives is very much that you can always rollback and everything is in its own sandbox

I've never got that impression from the community; since day one I've had the impression that it's rollbackable in a revision-control way rather than sandbox-like. The dependencies are actually global instead of sandboxed; Nix just makes it explicit which exact instance of what depends on which exact instance of what. That's not sandboxing at all.

Well, to be honest, it only occurred to me that you could get that sandboxing impression after reading your comments, and yeah, I can understand your point.


> But the impression the community gives is very much that you can always rollback and everything is in its own sandbox, which is sort of true, but not at all true as soon as malware happens.

This is more the case with something like Hydra, where you have a remote Nix store and builder. Then even if a given NixOS instance is compromised, the store stays isolated and intact.

So then if you are doing things right, you should be able to optionally back up any mutable data you need and then blow away the entire instance from scratch, creating a new one immediately after.

And bonus points if you can run a UEFI-over-HTTPS image on boot so that your boot image and config are always being delivered (and signed) fresh from a locked down server you control. That way if you want, on boot all nix-store content is validated with `nix store verify --all` before ever being loaded in any trusted context.


"rolling back" an Ubuntu update? Considering how atomic rollbacks work on NixOS, what do you mean by this in the context of Ubuntu?


If you update ubuntu or arch and something breaks, you have to look at `dpkg.log` or `pacman.log` to see what updated, and then you might need to grab an old package from the archive and manually install it.
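
On Arch that manual dance looks roughly like this (package name hypothetical):

    grep upgraded /var/log/pacman.log | tail                           # what just changed?
    sudo pacman -U /var/cache/pacman/pkg/foo-1.2-1-x86_64.pkg.tar.zst  # put the old one back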


This intrinsically doesn't work reliably, because dependencies are globally namespaced: you can't have every version of a package installed simultaneously, so in Debian a clean rollback is not remotely guaranteed. In Nix, it is.


> But the impression the community gives is very much that you can always rollback and everything is in its own sandbox, which is sort of true, but not at all true as soon as malware happens.

I can see why if you do malware research or something like that, you might hear about the rollbacks capability and get your hopes up in a certain way, but that's not really the use case.

NixOS is nice for running untrusted/suspect software in a few ways I can think of, though. (They don't exactly make NixOS a security tool, but I think maybe you could leverage them to build one.)

1. If a NixOS system is compromised, blowing it away and installing from clean media is cheap compared to on other operating systems, since you can retain more of your special configuration. Reinstalling is a bit more like a snapshot restore, at least in terms of the systemwide setup (users, running services and their configurations, and installed packages).

2. NixOS does make it really easy to spin up a clone of your running configuration in a VM instead of directly switching to that config on the local system.

3. The Nix sandbox is a pretty nice place to perform builds from big repos where packages can run arbitrary hooks at build time, like PyPI and NPM, since you can have it build in chroots with temporary build users, no network access, and maybe some other nice things.

There is also actually at least one Nix-based OS trying to achieve new things in security research out there, Spectrum: https://spectrum-os.org/

> I'm surprised at the amount of effort people are expending for just this feature with no additional security.

NixOS (and Nix more generally) doesn't have a singular feature like that driving its usage or development forward, even though rollbacks is a really nice one that will often come to mind when you ask NixOS users what they like about running NixOS.

But if I had to name contenders for the top one or two 'biggest differentiators' from other tools/paradigms that let you achieve similar effects, like other configuration management systems or copy-on-write filesystem snapshotting, I'd say it's the totalizing way that NixOS integrates those features.

Because NixOS generates the configuration of the whole system, it gets to avoid having to inspect most of the system's state, and generally handles the bits of transitioning between configurations that do require inspecting and reasoning about the state of the system quickly and pretty well. There's just a smaller surface area there.

Similarly, you're just more likely to be able to easily roll back with NixOS because those features are built into all of the normal system/package management operations, and leveraging those things is generally the path of least resistance to changing the system. You end up being able to count on them more 'by default': you're much less likely to make an important change and have a gap without a snapshot. The garbage collection system makes clearing out the unused generations easier (imo).

The general reproducibility also gives you multiple layers of intervention for rolling back: even if you do collect all your past generations, your version control system becomes another reliable way of 'rolling back'. Both of those ways of going back and forth through iterations of your configuration can be combined with similar interventions at other layers, like dotfile management via Home Manager, snapshotting filesystems for unmanaged files, selective persistence via something like Impermanence, etc.

These things can add up to a system where the kind of ad-hoc changes that might leak through your state management tools (snapper, etckeeper, dotfile management, etc.) become a radical departure from the way you regularly work.

Another differentiator here is maybe the generality: when you Nixify, you sometimes have to do a lot of work up front just to get things working on any deployment target, but the marginal work to go from a NixOS setup to some other kind— generating identical container images, preparing a VM for local use, running your config on macOS, partially sharing your desktop configuration with a server, letting a friend or colleague experiment with or debug your exact setup, preparing an AMI, etc.— is lower, and decreases with each further investment you make in the Nix universe. Different aspects of that inevitably end up being valuable, impressive, or delightful to different users. Taken alone, none of them might seem incomparably compelling over alternative approaches.


I can recommend reading the systemd manual entries (e.g. man systemd.exec).

systemd, meanwhile, has a lot of options for managing a seccomp-based sandbox, e.g. various Protect* options for the filesystem, mounting critical things read-only, simulating a chroot with its own fake root user, etc.

You can also manage the capabilities of a binary from there, so it's actually integrated down the kernel stack.
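
You can even try these ad hoc from a shell with systemd-run (as root, or add --user; all properties are from man systemd.exec):

    systemd-run --pty --wait \
      -p ProtectSystem=strict -p ProtectHome=read-only \
      -p PrivateTmp=yes -p PrivateDevices=yes \
      -p NoNewPrivileges=yes -p SystemCallFilter=@system-service \
      some-command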

However, as you mentioned, the lack of an official "profile database" for common packages/software makes it just as useless as the other tools.

I wish we had a repo where all the things come together and people can just do something like "autosandbox apache2" and it will do the rest.


Thanks. I'm learning about this today and I'm beginning to suspect all the extra isolation software is not really useful if you configure AppArmor and SystemD properly per service.

The space between "full virtual machine" and "unix permission model" is vast and confusing.

I would have thought that because everything is hashed on nix, it would be trivial to spin up full "virtual machines" without consuming mountains of disk space, but that does not seem to be an option.


Firejail does this. The profile database is the two "profile" directories in https://github.com/netblue30/firejail/tree/master/etc


Sorry… I see no other way to contact you. I saw here in one of your previous comments that you were able to put 32gb of memory on a T440p… can you tell how!? If possible please dm me. Thanks.


> If you want a sandbox on NixOS, your options are still [...]

Exactly, I have brought this up in the Nix forum a few times, but developers don't seem to see the value.

Really frustrating and a bit ironic given that sandboxing should be quite appealing to a crowd that is attracted to Nix.

Guix does implement a variety of sandboxing options.


NixOS isn't a security solution.

If you expect that, you will be disappointed.

It's isolated for good actors.


> Looks nifty— basically it's the "let me try that in a container first" except on your live system with no setup to get it going.

Also, if I understand it correctly, it saves doing a potentially expensive operation twice: `try`ing actually performs the operation, ready to be committed, whereas if you do something in a container and it works, you still have to do it again "normally".


It would be nifty to save out "try"s in a sqlite/whatever and then curl-install tries on other systems, such that you can easily clone certain setups between machines on a small/home network.

Also, if you could name tries as install-stacks, you could do `try --name homeWebServer` [then do your tries here].

Then go to another machine and, from your try repo, just type `try install --name homeWebServer`, and it does whatever your try stack was.


This is basically what you can get with Nix, sharing a binary cache between machines and getting an identical dev environment / package setup.


From my cursory understanding (based solely on the README), it seems that `try`s are just directories, so that they can automatically be slung around, without any need for a backing database:

> Sometimes, you might want to pre-execute a command and commit its result at a later time. Invoking try with the -n flag will return the overlay directory, without committing the result.
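
So sharing one might be as simple as the following (the -n flag is quoted from the README; treating the overlay directory as portable between similar machines is my assumption):

    d=$(try -n pip3 install libdash)   # pre-execute; prints the overlay directory
    # inspect $d, rsync it to a twin machine, or commit it later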

Also:

> curl install tries

… my brain instantly translated that to "curly fries". Built-in auto-correct!


Makes more, simpler, sense...


I wanted to use NixOS but it just proved to be too much of a PITA to get going with :-/

Hopefully someone will find a way to take some lessons from it while leaving all the friction behind.


The Nix language itself is the biggest PITA aspect of NixOS. It's just so unnecessarily awkward.


Agree, NixOS without the Nix would be awesome.


So guix?


I am intrigued by the idea of experimenting with NixOS. For now, my daily driver for workstations is Arch; on servers it's RH-flavored distros or Debian. Everything is managed by Ansible, plus persistent data is backed up. A lost server could be reinstated quickly, same as setting up a new workstation. I understand NixOS can do this too, and I have also read about Home Manager in NixOS, which I think is essential. Otoh, I'm not a developer; I don't think I need different versions of software. Maybe that's why I haven't switched to NixOS so far.


As cool as this is, it shouldn’t be necessary. A proper undo turns every command into the equivalent of a “try”, allowing users to experiment without fear of data loss. Everything in a computer user interface should be undoable.

This has been known for over 40 years, but the industry has been very slow to get the memo. The undo implementation on the iPhone is a weird joke. CLIs have barely even tried (with a few exceptions like Jef Raskin’s Canon Cat, a textual UI completely different from anything else I’ve ever seen).

Maybe one day…


To quote Tom Lehrer, "I am never forget the day..." [1]

I was working with WinXP, trying to reconfigure a machine's network for something abstruse. I was making changes in the network configuration dialog, and kept tentatively moving forward because there was always the "CANCEL" button available to revert my changes.

I made one last change, and suddenly the "CANCEL" button dimmed. It was as if you were creeping into the entranceway of a haunted castle, and after one more step the door slams shut behind you.

20 years later and I'm still scarred...

[1] https://www.youtube.com/watch?v=gXlfXirQF3A&t=24s


Heh, it's like when there are Ok, Apply, cancel options and cancel absolutely does not undo or negate having pressed Apply.


me trying to figure out how to undo my ping to google.com


me serving google a court order to remove logs with my ping in them

All in a day's work.


don't forget all the routers in the path between your consumer level Internet connection and Google. Level3 and telia need their court orders too!


If you rm -rf / you will also remove the undo history and script ¯\_(ツ)_/¯


ping -c -1 google.com


DO NOT RUN THAT or you risk breaking the Computer Fraud and Abuse Act! It sends a 'reverse ping' from Google to you, and it effectively hacks their server to initiate it.


No, it doesn't. It just sends 1 ECHO_REQUEST packet. If this was sarcasm I could not detect it


Above: someone tries humor on Hacker News. It goes exactly as expected.


Is it possible you and your parent (my child) are deepening the sarcasm to indistinguishable degrees? I think not, but I like to imagine.


It seems to give up before sending anything at all:

    ping: invalid argument: '-1': out of range: 1 <= value <= 9223372036854775807


Please explain


Programmer humor is generally terrible


Hah, clearly you have no idea of what you are talking! In order to ascertain the general level of humor of a programmer, you would have to assume all programmers are equivalent! Furthermore you would be assuming that there is one kind of humor, which is clearly a fallacy. Finally, the amount of "terribleness" you allude to cannot be quantified in such a way as to distinguish an objective level of quality. Therefore you are clearly wrong! Try doing some research before you share such a naive opinion in the future.


qed


The `-c -1` reads like it's requesting sending negative one pings, hence the reversal.


A gnip!


In theory: the router could capture such ICMP packets, drop them. Save them as a pcap and then mail them back to you.

Maybe via a Pigeon?


Hm, what is this "try" command if it's not the first step towards a proper undo? I've never seen a better increment towards that.


A particular Billy Madison quote comes to mind.

Cars are cool but humans should be capable of flight, I guess only birds got the memo about wings.


Agree in principle, but I'm not sure it's possible to implement Undo for all shell operations. There are a lot of existing systems out there, though, and anything which can be adopted incrementally is a big win.

Another shell variation I like is using trash rather than rm.


I agree that there should be universal undo, but I think by contrast the (informal) transactionality of the "try" model can be a big deal if your system is concurrent.

If you do a thing, find out it isn't what you wanted, and then undo it, while at the same time some other process is observing the mutated state, that's potentially a much trickier mental model. If you undo a configuration change and in the meantime a background process has acted on the new configuration, how do you roll that back? Rewinding the timeline is one thing, but maybe throwing out all the work that happened with the wrong configuration is even worse than the status quo.

Off the top of my head, I'm thinking about accidentally changing retention windows or bash history max size, where the data loss is super indirect and you'd have to hunt down the undo button for a completely different process, or changing a log format so you end up with a file that's mixed JSON and plain-text logs.

(Of course, "try" presumably applying the changes in a completely synthetic way after the fact could be its entire own can of worms in a very dynamic system if there's a risk changes are applied in the wrong order or skipping some atomicity dance.)


Fully agreed on this. The simple, default way of doing anything should be able to be undone. Hell, years ago Google even figured out how to give people an undo button on email! Yes, it's just a simple time delay but it makes such a huge difference because of how your state of mind changes between typing an email and hitting send. Or hell, maybe you just accidentally hit the send button.

Undo allows you to make the default behavior for every operation to be to just go and do it (or queue it up to be done). No need to have a confirmation that the user is going to quickly become conditioned to pressing yes on while also being just an annoyance 99% of the time.


We should also build a space elevator and figure out nuclear fusion. Unfortunately, many good ideas are easy to describe but difficult/impossible to do.


Sorry, I don't want a transactional layer to accompany every interaction I have with my Linux system (or most other systems, for that matter).

I want to choose when I need undo/redo and when I don't; a perpetually present transactional layer is just cruft for most of the time.

Furthermore, it opens up basic system interaction to the same fundamental questions that in-app undo systems have: do you branch? how deep is the history? how persistent is the history? etc.


> I don't want a transactional layer to accompany every interaction I have with my Linux system (or most other systems, for that matter)

I guess you don't use journaling filesystems then ?


a journaling file system is not the same as creating rollback points for every shell command prompt + enter.

also, if you're going to be strict about it, a true undo would need to handle modifications to anything in the /proc or /sys or /dev "filesystems", which are not covered by journaling-anything.


You can implement such a system-wide undo with filesystem snapshots using LVM2 or btrfs (or with backup/restore).

However, you also need to properly isolate software in containers or VMs since of course doing a system-wide undo on a system that is running a server will also revert the server state, which is usually disastrous.
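
With btrfs, for example, the snapshot half is a one-liner (assuming / is a subvolume):

    sudo btrfs subvolume snapshot / /.snapshots/pre-change   # cheap copy-on-write checkpoint
    # ...risky change here; to undo, restore files from /.snapshots/pre-change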


That made me curious. I think there is an emulator for the Canon Cat here: https://archive.org/details/canoncat


Undo needs a ton of free space, possibly an infinite amount.


I'm too much of a cowboy for this, but I do recognise this is very cool.


I wonder if there's a world where we run nearly all user programs in isolated (containerized) environments, with minimal access to persistent storage, etc. In many ways it seems weird that we let every program access everything that your user has access to, by default.


Android works like you describe, for the most part. It would be great to have such a permissions model on the desktop.


Android still has a global FS that programs can write to if they have the permission; in Fuchsia there is no global FS, which is as sandboxed as it gets.


try is written in POSIX shell; it's cowboy too.


Interesting. It's like NixOS but for individual fs changes. I'll have to give this a try.

off topic: Theranos CEO that was recently convicted really ruined Star Wars quotes for me

https://www.thecut.com/2019/03/the-most-bizarre-moments-from...


That's awesome, it feels like an area worth exploring within the shell. If I had a magic wand and I could create an ideal shell experience, I'd love it if you could soft-execute a script in a notebook style environment where each command can be inspected and tweaked before committing the change. It feels like shells are still incredibly constrained and that they haven't evolved that much since the 90s.

The defaults are kind of crazy as well. Every OS should ship with a `trash` binary that puts a file in the Trash without actually deleting it, rather than recommending `rm`. I get some people are perfectly happy to play without guard rails, but I'm sure some of us would like a few more guard rails which we can tweak.

I think another similar innovation is how with NixOS you're supposed to be able to diff system changes between upgrades with ease. Which makes sense, since your OS config is usually based on the vendor default with a bunch of changes applied on top.


Linux/Gnome has "gio trash" as the trash command. It's available yet mostly unused I think.
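
e.g.:

    gio trash some-file.txt   # moves it to the Trash instead of unlinking it
    gio trash --empty         # take out the trash for real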


Interesting. How does it handle the case where a modified file receives another change between the run and the confirmation? Understanding this case is really my only reservation about trying this out.


It doesn't; it will just overwrite the file with its version (cp -af).


At least I'd prefer it to fail if something changed the files underneath me, and then prompt me to ask whether I want to retry for real.


Reminds me of when I was using bubblewrap (the tool Flatpak uses for sandboxing) to run programs from my main system on a tmpfs, to avoid changes to my main system.
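
Something in the spirit of this (the flags are real bubblewrap options; the exact set depends on what the program needs):

    bwrap --ro-bind / / --tmpfs "$HOME" --tmpfs /tmp \
          --dev /dev --proc /proc some-program   # writes to $HOME land in a throwaway tmpfs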


It's certainly a neat idea, I like where it's coming from. I think it might be better if we just had an easier way to checkpoint and restore systems, so you don't have to "try" anything, you just "do", and ideally we have a very reliable way to just... Go back 2 minutes in time if something goes wrong. Something that isn't painful or slow and does work intuitively.

Filesystem checkpointing/snapshotting can take care of most of the ability to restore a system after a big problem. The next best thing is something like process checkpointing, and after that would be checkpointing a whole system's processes, which would be much more complicated and perhaps not worth it.


Filesystem checkpointing/snapshotting is one of the nice things about ZFS.

If the OS supports ZFS on boot drives, you can do checkpointed full system/kernel upgrades.

I like your idea about process checkpointing. Of course any side effects (eventually we're only talking about network in/out) would be nonreversible, but would be theoretically replayable (have to checkpoint the RTC too). The other side of the connection might have other ideas...

Sounds like a dream debugging tool though.


Honestly, this is only necessary when there is no virtualization, the system image is mutable, and not managed by configuration management.

With virtualization, snapshot and restore functionality makes this completely moot because it occurs outside of OS and captures the entire system state.

If the system image is mutable and not using configuration management, then system entropy is a real problem. Better have backups before "trying" anything you can't undo or has side-effects. You generally should have backups, configuration management, monitoring, and minimize attended commands issued and log them. Attended fiddling is the path to entropy and problems.


Neat! I’m definitely looking for something like this. My use case is running semi-trusted dev tools that come packaged. I need to run them, but I don’t want to trust them, but it’s too much of a faff to actually check, or run them in isolation (think dotnet tooling).

What I would love to see is blocking on IO reads and writes, and on network requests. That would let me see if some script is attempting to exfiltrate my home dir / ssh / gpg keys.

Not looking to fully secure, just for some more intuition about commands/scripts and their dependencies


The title reminds me of my time in a call center. At that time I was trying to earn an income working as an inbound and outbound agent for a large German telecommunications network operator. When I was fired, my team leader gave me this advice: "There's no trying, just do or don't do." (in German, of course)

I think I had a moment of post-traumatic stress disorder while reading the title.

But binpash seems to be a very useful and nice piece of software that I will gladly give a try. :)


This is what Elizabeth Holmes believed in. I think it's a quote from Star Wars.


“If you're going to try, go all the way. Otherwise, don't even start. This could mean losing girlfriends, wives, relatives and maybe even your mind. It could mean not eating for three or four days. It could mean freezing on a park bench. It could mean jail. It could mean derision. It could mean mockery--isolation. Isolation is the gift. All the others are a test of your endurance, of how much you really want to do it. And, you'll do it, despite rejection and the worst odds. And it will be better than anything else you can imagine. If you're going to try, go all the way. There is no other feeling like that. You will be alone with the gods, and the nights will flame with fire. You will ride life straight to perfect laughter. It's the only good fight there is.”

– Charles Bukowski


Sunk cost fallacy. And the cause of the continuation of so many horrible wars. Take Quark's advice on the third rule of acquisition: https://youtu.be/hdQcGzbpN7s

> Never spend more for an acquisition than you have to.


Doesn't work on popos:

    $ try pip3 install libdash
    Warning: Failed mounting /boot as an overlay, see /tmp/tmp.BrLiRj0Brb
    Warning: Failed mounting /home as an overlay, see /tmp/tmp.BrLiRj0Brb
    Warning: Failed mounting /snap as an overlay, see /tmp/tmp.BrLiRj0Brb
    /tmp/tmp.c7hp4nI6lE: line 4: cd: /home/user: No such file or directory


Could be related to https://github.com/binpash/try/issues/38 Could you see if the 'mount-fix' branch works?


Same for me on Ubuntu.


Such a simple concept. Not something I'll reach for every day but I can absolutely see myself using this a few times a year. Thanks for sharing!


I think there are some interesting use cases where I could see myself using this every day.

Say you're a Django developer, using the dev server and a SQLite db. Every time you restart the dev server, you can easily reset to the previous state: SQLite db reverted, any other media that was uploaded or modified changed back. All with no setup, no containers, just prepend "try".
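
i.e. something like (sketch):

    try python manage.py runserver
    # poke at the app, then decline the commit prompt to revert db.sqlite3 and any uploads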


I use a similar approach to have multiple isolated VS Code instances each in their own Nix shell, for easier development. It's surprisingly effective, and performance is just fine. I'm kind of curious how this interacts with Chromium's use of SQLite, but I haven't noticed any particular problems.


This is really nice! What a clever, but oh so obvious, idea. Love it.

Anyone know of an equivalent for macOS?

Obviously Macs are missing some of the features this uses, but I wonder if there are any alternatives that could enable this sort of command on a Mac. I assume the lack of native Docker or equivalent is probably indicative of no.


This requires Linux specific features. The only way you are getting this on Macos is the same way you are getting "Docker on Macos" - by running a virtualised Linux machine.


I haven't run this command, but (editing, as this was an incomplete thought accidentally posted) minikube and brew's version of Docker work for me as a complete Docker environment on my (Intel) Mac.


https://sandboxie-plus.com/ This exists for Windows, but apps can detect and change their behavior when you are hooking all of the underlying calls for filesystem/registry.


Super simple with zfs. take a snapshot. run your command. you won't see what it did to anything non-zfs like temp filesystems (/var/run and so on) though.
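
e.g. (dataset name assumed):

    zfs snapshot rpool/home@before
    some-command
    zfs diff rpool/home@before       # what changed since the snapshot?
    zfs rollback rpool/home@before   # or, if happy: zfs destroy rpool/home@before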

I wish this were possible on macOS. You can see all file activity with dtrace but that's not nearly as easy as snapshotting.


This would be good with batch HPC systems, possibly both for checkpointing and idempotency. I was trying to do something similar with FUSE overlays, but had so many issues (2020) with this and rootless containers at the time.


This is awesome! I wonder if, using the same technique, you could even create a pseudo package manager: once a pkg is installed via 'try', it keeps track of its files and can remove or update such a pkg.


Neat trick. What about commands that impact the running state (or other external systems) and not just the file system? Would that also be rolled back? Take effect? (I'm not familiar with namespaces)


The example does `try pip install ...`, so it does execute the network requests; I think it only sandboxes the file system effects. I can't imagine a way for the tool to know what the network request would answer without actually running it, right?


Oh, I actually implemented something similar a few days ago. I was writing an rpm build script to make a repository on S3. The repo is mounted with rclone, and I'm using overlayfs to make a staging area where I can add new rpms as they are built and update the repo metadata, without risking an inconsistent state if something fails to build partway through (e.g. the build succeeds for x86 but fails for aarch64). Then, when the whole build process has succeeded, I use rclone sync to commit the changes back to S3.


I love that their demo is to finally give me a dry run command for pip.


Since pip v22.2, pip install --dry-run finally exists. It will still perform downloads but it will dry run the install part.
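
e.g. (--report landed in the same release, if you want machine-readable output):

    pip install --dry-run requests              # resolves and reports; installs nothing
    pip install --dry-run --report - requests   # JSON resolution report on stdout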


Amazing, good to know, thanks


It would be fabulous to have something equivalent within SQL, instead of having to run a SELECT first and then hacking that into an UPDATE. Every time.


Don't transactions give that feature?


"transactions" do give that feature, but related to the nix/emacs discussion above, my gripe is that every rdbms seems to do it differently.

That said I forgot how overloaded the term "transactions" is and most kinds of "transactions" don't allow undo or preview modes. Like, there's the reserved word "transaction", and then there's the "transaction" that can literally describe any db/network request, or an exchange of money for goods and services.

When I do what the parent comment says, it's basically because the select statement is a "test"/preview

In some dbs you can do rollbacks or need to commit your db changes to affect the global state, but I haven't seen that universally + consistently implemented.


Is there an overlayfs that copies to the upper fs on read? I know of catfs, but that proxies reads, whereas overlayfs just passes control to the other fs.


What a wonderful idea. Thank you for sharing this tool


Maybe there is already something like this, but this would be sick for SQL DBs. Having to make prod updates to a table always terrifies me.


Just run the whole operation in a transaction and verify whatever you need to verify before committing.


A minimal sketch (table and column names illustrative):

    BEGIN TRANSACTION;
    UPDATE accounts SET status = 'closed' WHERE last_seen < '2020-01-01';  -- modify the DB
    SELECT status, count(*) FROM accounts GROUP BY status;                 -- inspect the state
    ROLLBACK TRANSACTION;

and then switch the ROLLBACK to COMMIT when satisfied.


This is cool; I'll probably do this.


Just mind the isolation level of any concurrent queries which may be running. By default you are probably fine, since it's usually snapshot isolation mode (they will only see committed results as of query start), but there are other modes that break this.


maybe you could run the updates in a transaction that you could roll back if you were unhappy with the result?


I'm afraid I will start trying things, and then I'll forget to type "try". Can you make everything automatically try?


A sandbox would have to stop network access, otherwise the program could spread to another machine or leak your data.


The container + temp overlay approach is interesting but this could more simply use filesystem transactions, if we had them.


This looks something like transactions for devops, which would be great. It could make for safer deployments in the future.


I thought this was going to be about structuring code to eliminate the use of try/catch statements.


How does this work if you run a database insert query from the command line? Or to a remote database?


I wonder if it could predict the cluster change of my `kubectl` command. Gotta parse those messy .yaml files.


Yoda had no nuance.

And btw Yoda, the elves of Rivendell lived way past 900 and looked better than you, so nyeh


Why are we needlessly illuminating all those console background pixels to the max?

Anyway, interesting project!


Backlit LCDs


Or simply use a filesystem that allows snapshots.


Does it work with SELinux? Asking for a friend. ;-)


What’s the catch?


Would love to have this on Apple silicon Macs


If you don't mind ditching macOS, you can get it natively on an Apple silicon Mac by running Linux on it. https://asahilinux.org/


Still trying to figure out how to use github


I really tried to guess what this would be, and couldn't. Laughed out loud; this is a great idea.


> i really tried

Well that was your first mistake!


Really really cool!


Anybody remember Norton CleanSweep? :)


Cool tech.

As an aside, I HATE the saying "Do, or do not. There is no try." Perhaps the best response I ever heard to this drivel was in the miniseries "The Dropout", where piece-of-human-garbage Elizabeth Holmes is played by Amanda Seyfried.

Professor Dr. Phyllis Gardner, played by Laurie Metcalf, responds, "That's all science is: trying."

Have fun in prison, Liz!


Thought this was going to be a parody, like the line from The Rock.



