This is awesome, but I am absolutely bloody terrified of using it on my systems.

As is stated in the README, if you write 'rm' or anything like it... oopsie oops.



Right, that's one problem. The author says:

> But you'd be careful writing "rm" anywhere in Linux anyway, no? Also, why would you want to pipe something into "rm"?

But the thing is that you don't need to intend to pipe to "rm". Maybe you were typing something else, like a command "rmore" or something. This danger is also not strictly limited to `rm` and `dd`: you have to be careful that no prefix of the command you intend to write can be interpreted as something dangerous.


The author's theory there about user behavior seems dangerously wrong.

I'm quite careful writing command lines because the whole experience of the shell is that of working with sharp knives: you get a lot of power but if you screw up, you'll feel the pain. The point of this tool is to take away a lot of the pain that teaches people caution. Its whole theory is "just try a lot of stuff as you explore what you want".

In their shoes I'd look at using some of the container/security magic as a way of nerfing commands. If the on-keypress runs work in a way where they can't make changes to the filesystem, that seems way better to me. Even better if the tool then reports "Would have deleted 532 files" in a warning color at the top of the output.


> I'm quite careful writing command lines

A safety measure I picked up from a sysadmin while watching over their shoulder: start writing nifty command lines by prefixing them with # first, to prevent havoc when fat fingering the [enter].
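
For example (the command itself is made up, just to illustrate the habit):

    # find . -name '*.log' -mtime +30 -delete    <- inert until the leading '#' is removed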


Yes, though I prefer: echo ...

The reason is that you'll see expanded variables and wildcards before committing to them.
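
A made-up illustration (filenames invented); the echo shows exactly what rm would have received, without deleting anything:

    $ echo rm -- *.bak
    rm -- notes.bak report.bak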


But prefixing with echo only disables the first command in a pipeline, while commenting it disables the whole thing.


It also executes command and process substitutions, opens files, etc. But granted, it's a useful trick in some cases, after passing the fat-fingering stage.
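
For example, the substitution in this "disarmed" line still runs (filename made up):

    echo would remove: $(rm -v stale.log)    # the rm inside $(...) executes anyway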


You can move the echo to the part you are currently working on.


It's a pity you can't do something similar to SQL: when writing UPDATE/DELETE always write the WHERE clause first


Something I learned early on when writing possible dangerous SQL selects:

  CREATE TABLE mytable_backup AS SELECT * FROM mytable;
  SELECT * FROM mytable WHERE condition;
  DELETE FROM mytable WHERE condition;


Or just let the DB work for you with BEGIN; + (select/update/delete) + COMMIT; or ROLLBACK;


What I like about SQL is that it's got the double safety of ";". To accidentally run a SQL statement before it's ready in a command line, you'd have to both add the ";" AND hit Enter, or have the bad practice of adding ";" before the statement is ready, or use a bad SQL command line that sees the ";" as optional.


I always start a session with BEGIN, write UPDATEs/DELETEs as SELECTs first and (if possible) use a staging database until I'm fine with the result.


In SQL, you can at least wrap the whole thing in a transaction. Then, just roll the whole thing back if anything came out wrong.


I usually run SELECT COUNT(*) WHERE ... first, and only subsequently replace the SELECT with a DELETE.


Thanks for the tip, I'll definitely be using this. Fear of accidental rm -rf * keeps me up sometimes.


Looks like you need to know about the 'sleep' command...


unzip; strip; touch; finger; grep; mount; fsck; more; yes; fsck; fsck; umount; sleep


`fc` helps. Setting `FCEDIT=ed` helps a lot.


This command can be dangerous itself.

For those who don't know... "The fc utility lists, or edits and re-executes, commands previously entered to an interactive sh."


It seems like most of the scariness could go away if only you had to press a key combo each time you wanted to run the command / render the preview


Yes. Like Enter? People might still like the fact that the cursor doesn't need to move from wherever it's editing the command to run it, and can still continue to edit there.


That's the main direction I currently seem to be converging on, based on the feedback I'm getting. That said, one person on lobste.rs suggested trying to play with Linux capabilities to make the filesystem immutable... now that would be a seriously fancy trick if it works!


Can I say that I hope you don't make this tool too Linux-specific. It seems to run pretty reasonably on Mac and would probably also work on BSD variants. Since I use all three (Mac, BSD, Linux) on a semi-regular basis, I'm pretty unlikely to integrate a tool into my Unix workflows if it doesn't work reliably on most unixes, so if you can look into cross-platform ways to accomplish things, that would widen the audience of people who could get value from the tool.

For example, one way to address the rm problem would be to give it a "safe mode" that drops user permissions and runs as nobody. You might also create a blacklist of common, potentially-dangerous unix commands that you don't execute. Both of those are examples of how to address the need without resorting to Linux-only tricks.


How can I do the "drop user permissions" thing? I haven't seen such a keyword mentioned by anyone yet, sounds interesting. Does this not require some kind of syscall, however, thus not really being cross-platform anyway?


'nobody' is just a standard UNIX thing. It's just a user and group (like root, $USER, etc) except 'nobody' is meant to have no permissions to any folders or files.

In practice it's only as good as your user's ability to avoid the temptation of 'chmod 777' (for example), but it seems a good place to start.

As an aside, I had the same idea to do a tool like this as well. Except mine would have been built into the $SHELL I'm writing (as that has a heavy focus on being more IDE-like than traditional shells). I scrapped the plan for pipe previews precisely because of the dangers we're discussing. However your tool makes a lot more sense, because at least it is manually invoked, whereas my original plan was to have that feature automatic and baked into the shell - which is an order of magnitude more dangerous. So I'm glad someone else has run with this idea.


:)

I've seen somebody mention the idea of applying this to a shell, but only executing after pressing some special key (e.g. Ctrl-Enter). That would probably make it un-dangerous enough? This seems to be what people want from up too, anyway.

As to the "nobody", from what I'm reading, it seems you'd first have to be root, to be able to switch to "nobody"... so this doesn't really seem to be useful to me in this case... :/


The other way to do it is to change ownership of the executable to nobody:nogroup and set the setuid/setgid bits.

Perhaps you could simply put those chown/chmod commands in the docs:

    sudo chown nobody:nogroup path_to_up
    sudo chmod ug+s path_to_up

I've tested it, and it seems to prevent deleting files with rm. The downside, however, is that it also prevents writing the results to up1.sh. Perhaps if writing to the file fails (or you detect the process is running as nobody), you could send the finished pipe sequence to stdout instead of a shell script. Then, people could run it like:

    cmd | up > up1.sh


The solution there is to not set any writable bits on the up executable. Then only root will be able to write to it (which is ideally what you want for any tool within /usr/bin, or wherever, anyway).


That's an interesting argument for having it hot-key triggered. The shell in its current form already supports hot-keys, so I could plug right into your tool verbatim that way. The only issue is that you fork a shell, so any $SHELL-specific behaviours of my shell would be lost.

I appreciate this is a personal project and sometimes there is nothing more annoying than feature requests; but if you ever did decide to add a flag for choosing alternative shells, then drop me a message (raise it as an issue on github.com/lmorg/murex) and I'll add 'up' as an optional 3rd-party plug-in.


i fear a blacklist won't be enough. better a user-configurable whitelist. every time a command is not on the whitelist, don't run it, but give the user the option to add it to the list instead (with a multi-key action, so that it can't be added too easily and by accident)

greetings, eMBee


Consider that the filesystem isn't necessarily the only thing you can mess up by accident. It's much more unlikely, but you could accidentally trigger a bunch of unwanted network requests (e.g. piping into `xargs curl` or something), among other possible things with external effects.


Maybe cutting off network access could be done too? That would reduce some potentially useful features, but it could be behind a parameter (hey, we're talking Linux, never enough parameters! ;). That is, assuming a syscalls/capabilities trick like that is possible at all. (Can anyone say more here? Some kernel hackers? This is HN, ain't it?)


Yeah, I think almost any ability to produce an effect other than printing to stdout and stderr should probably be restricted by default, with a flag to enable it. If people adopt this and frequently use specific capabilities, they can add a shell alias to enable their preferred set of capabilities.


Or, one possible alternative would be to always prevent any side-effects other than printing output, and then display a warning saying something like "This command tried to access the network/modify the filesystem/be naughty in some other way. To allow this and re-run the command, press Control+Shift+Enter"


capabilities should certainly work. for portability, setting a non-existent proxy should also help. most command-line tools honor the shell proxy variables and will fail the request.
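
something like this (the address is arbitrary, any dead endpoint will do):

    # well-behaved tools honor these variables and fail fast instead of touching the network
    export http_proxy=http://127.0.0.1:9 https_proxy=http://127.0.0.1:9
    curl -s https://example.com    # now fails with a proxy connection error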

greetings, eMBee.


I think from a design perspective, it's better to build your sanity checks and safeguards into the UI instead of some fancy abstraction that's running in the background. Throw in some options to make it user configurable and you have a pretty slick solution that behaves in a way that'll feel familiar to gurus.

I think requiring a keypress by default to run any commands and maybe even throwing up a warning for a list of known dangerous ones (perhaps user configurable) would work well. A really fancy solution would be to do this stuff via some sort of linting.


I don't see how linting could work, I'm afraid. There are obscure options in zip, git, grep, and who knows what else, not to mention full-blown interpreters like awk and perl that can edit files via a myriad of syntaxes, and then there's the halting problem. Similarly with a whitelist/blacklist, I'm not currently really convinced it could help much; take `xargs rm` vs. some hypothetical `foobar -rm` command + params (where maybe -r = recursive, -m = monitor?). A keypress, on the other hand, sounds to me like it could be a reasonable default compromise... still with an option to switch to "fully accelerated, fast mowing" mode at any point, if one so desires...


> `foobar -rm` command + params (where maybe -r = recursive, -m = monitor?)

An example of such a foobar with those exact options would be `inotifywait`. :)


Agreed. As another commenter said KISS.


Too fancy in my opinion. Personally, I prefer KISS solutions. Having an immutable filesystem in up would be unexpected, and I'm sure there are other destructive things that can be done that don't involve the filesystem. As odd as it may seem, I may also want to be able to do filesystem-destructive things in up, so why shouldn't I be able to?

EDIT: As an example, I may exploratively be playing with find options to match different sets of files in a directory hierarchy. Once I see I've matched the files I want, I may want to add a `-delete` to the end of the find conditions I specified to delete them all. That seems like a useful use of up, but making the filesystem immutable would disallow it.
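
Something like this (pattern invented for illustration):

    find . -name '*.orig' -type f            # iterate on the match in up until it's right...
    find . -name '*.orig' -type f -delete    # ...then append -delete to actually remove them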


Hm. I think in my ideal vision, you'd then quit up, and somehow magically have the pipeline ready at your fingertips on the shell prompt, so that you can just add the `-delete` there in a "regular" fashion, and press Enter. Not quite sure how this could be done with bash easily. I think the upN.sh file is not a bad approximation however. (Someone on reddit (I think) seemed to mention that zsh seems to have some functionality which could allow building something like that over up.) Interestingly, this seems to me to support the idea that limiting syscalls/capabilities could be a good approach (if at all possible, that is). Potentially maybe also cutting off network access? But even if I add some safety options, even if I make them the defaults, I'll try really hard to keep an option available for people who want to play with it raw, with no safety belts.

edit: I totally get you as far as KISS goes; that's why I built and shared the tool as is in the first place ;)


> Interestingly, this seems to me to support the idea that limiting syscalls/capabilities could be a good approach (if at all possible, that is). Potentially maybe also cutting off network access?

Yeah, I still think it's not a good approach. rcthompson gave an example with network access, but that doesn't mean that bad things can only happen to filesystems or over the network. You can also kill processes, shut down the computer, and many more things. I don't think you can guess what mistakes the user might make, or which actions were really mistakes and which were intentional.

Also, if you can avoid special permissions or capabilities for your core functionality, then that's better. People should be conservative in giving special permissions to programs, and I don't think it makes sense to give "up" the ability to make a filesystem immutable, even if it's in an isolated namespace.


Hmmm; so what would you think of the other popular suggestion, to add a (probably default) mode, where only after pressing, say, Ctrl-Enter, the command would be executed? With another shortcut allowing to switch back and forth to the "no seatbelts, confident, fast moving, fast mowing hacker" mode?


I would just do the Enter thing (I can't think of a reason to prefer Ctrl-Enter), and forget about the other mode. If you really want to do that other mode with a super-restricted environment, just have the code prepared to be denied permission (to setup that environment) and not assume that it has been granted. There will be users that won't feel comfortable adding those permissions/capabilities to "up".


you are building up a pipeline, you don't want it to be destructive until you are done.

so at the very end you could apply a special action that now runs the resulting command with safety disabled.

or toggle between safe and unsafe mode at will, with safe mode being default.

it's not necessary to have the whole up utility run with filesystem access disabled. but it could apply the restriction only to the pipeline it's executing. that would allow it to selectively apply the restriction as requested by the user.

greetings, eMBee.


Another solution is to only allow commands from a whitelist to be run. If the user types something not on the list you say "rm is not on the list of allowed commands, press enter to run and add it".


++

And rather a blacklist, since a blacklist would be much smaller than the potentially infinite number of programs you'd have to manage for a whitelist.


I think the whitelist makes more sense, since you can never say for sure that you've added every potentially harmful program to the blacklist. If it's easy to add things to the whitelist, as the parent comment proposed, I think the idea would work very well, especially considering most people have a relatively small number of commands they pipe other commands into.

Btw, if you want to see all the commands you've ever piped data into, sorted by frequency, you can use this command (which I put together using `up`!)

    history | grep -o '| \w\w*' | sort | uniq -c | sort -n | nl


Another thing you might want to investigate (for Linux, not sure about cross-platform alternatives) is OverlayFS[1], which allows you to create union filesystems where some layers are read-only and the top layer is read-write. This can allow destructive operations like rm without actually removing anything from the lower (read-only) layers, using whiteouts.
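
A rough sketch of the idea (paths made up; needs root, and a real integration would combine it with a mount namespace):

    mkdir -p /tmp/up-overlay/{upper,work,merged}
    sudo mount -t overlay overlay \
        -o lowerdir=/home/me/data,upperdir=/tmp/up-overlay/upper,workdir=/tmp/up-overlay/work \
        /tmp/up-overlay/merged
    # destructive commands run against /tmp/up-overlay/merged; /home/me/data is never touched,
    # and deletions just become whiteout entries under upper/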

I'd have replied to this on lobste.rs but don't have an account. I'm not a heavy poster but I'd love an invite if anyone has one going :)

1 - https://www.kernel.org/doc/Documentation/filesystems/overlay...


I need your email for an invitation; if you can send me your address via my own email (see my profile), or via keybase, I'll send you an invite. I don't see your email advertised in your profile here on HN, so my hands are tied for now :)


Thank you very much! I hadn't realised my email address isn't visible here. Maybe I need to move it into the "about" section.

In case you aren't aware, your email address isn't visible in your profile either. I've emailed the GMail address that I found on your website :)


Eshell can be set up to do this. It's neat for small commands. I turned it off, though.


Can you elaborate a bit more? Which functionality are you referring to? And why did you disable it? Really curious! (Author of up here.) I take it that by eshell you mean a shell in Emacs? (That's at least what I'm getting as the first result from Google.)


Indeed, I meant the emacs shell. https://www.masteringemacs.org/article/complete-guide-master... has a good overview in the section regarding the Plan 9 Smart Shell.

I ultimately just didn't find I used it much. If I am iterating, org-mode with shell sources fits a bit more naturally for me.

Edit: and to be clear, I meant specifically to have the output below the command with the command still in edit mode. Was not trying to say it is the same as up.


Well, that's only completely horrific. Cool idea, but should definitely be limited to a fixed list of commands or something. Even then, some stuff that might be handy like xargs could be quite dangerous...


> But you'd be careful writing "rm" anywhere in Linux anyway, no? Also, why would you want to pipe something into "rm"?

Not exactly a pipe into `rm`, but `foo | xargs rm` is a common enough pattern.


It’s also dangerous if your filenames have spaces in them.
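
The usual workaround, for what it's worth (pattern invented):

    # NUL-delimited names survive spaces and newlines
    find . -name '*.tmp' -print0 | xargs -0 rm --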


I was thinking about a similar tool to this one, and this was the biggest obstacle I could think of. Also, write `touch` and you create a file for each keystroke.

To become something more than a hack you have a few choices:

- a blacklist - will never be enough, and arguments may still be problematic

- a whitelist - will always be incomplete; arguments may be problematic, but a bit less so than with a blacklist

- limiting permissions somehow - tricky

The last and best option IMHO would be to wrap it in a sandbox, where all filesystem access is behind an overlay (i.e. mount namespaces and overlayfs). This way, once you are satisfied, you can apply the changes (if there is anything to change). The overlay would be removed and recreated on each keypress. It may also be possible to wrap process access, so one could safely play with kill, but I'm not sure. Even network settings, to some extent. But there is nothing you can do about curl -X POST or something similar.


Hmm, is there a straightforward way to be a bit more careful without detracting from the usefulness? What about a default blacklist of commands with the ability to override it through a config file?


I don't think a blacklist could possibly be comprehensive enough. I think you'd have to use some OS permission-limiting system to prevent it and any subprocesses it spawns from having any write access to the filesystem.


I think the tool would be much more useful with a whitelist. Do this only for grep, awk, sed and other similar tools.

Of course, much more thought is needed to try something like this. Somebody could just as well use awk with its system() function to do whatever...
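
For example, even with only awk whitelisted (target path made up):

    # a "safe" whitelisted tool happily running an arbitrary command
    echo | awk '{ system("touch /tmp/escaped-the-whitelist") }'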


Even an incomplete blacklist might be helpful, just in the interest of keeping perfect from being the enemy of good.


Author here: I hope something like that (syscall/capabilities limits) could work. If it is possible, it would mostly solve the problem, I believe. I'm kinda starting to realize that any command modifying some external state is already somewhat risky, since it could spin up some exponential feedback loop. (One person on lobste.rs mentioned that foo.bak.bak.bak.bak files could easily get created.) Regardless, I'm generally considering adding a shortcut/option to pause/unpause, and only execute on Ctrl-Enter when paused.


Another possibly reasonable option would be to create a (configurable) whitelist of commands that are considered safe, and keep running the pipeline automatically as long as it only contains whitelisted commands. Any time a non-whitelisted command is introduced, stop auto-running and require Ctrl+Enter or something, until the command once again consists of only whitelisted commands.

This would save you, for example, if you had a custom command called "gr" which was short for "get rid of current directory" (obviously chosen as a pathological example since it's a prefix of grep). As you type the word "grep", auto-running is paused because "g", "gr", and "gre" are not whitelisted, and then once "grep" is fully typed, it recognizes that "grep" is on the whitelist and resumes auto-running. And it never ran the dangerous "gr" command.
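
A rough bash sketch of that check (the whitelist and the pipeline string are just examples), deciding whether the current pipeline may auto-run:

    #!/usr/bin/env bash
    safe="grep sed awk sort uniq head tail wc cut tr nl"
    pipeline='grep -i error | sort | uniq -c'
    auto_run=yes
    IFS='|' read -ra stages <<< "$pipeline"
    for stage in "${stages[@]}"; do
        cmd=$(echo "$stage" | awk '{print $1}')    # first word of each pipeline stage
        case " $safe " in
            *" $cmd "*) ;;                         # whitelisted: keep auto-running
            *) auto_run=no ;;                      # unknown (or still a prefix, like "gr"): pause
        esac
    done
    echo "auto-run: $auto_run"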


Or perhaps use filesystem snapshots as an undo option ... if your fs supports them, of course.


Pretty much. Is rm blacklisted? OK. How about bash -c "rm"? cp? mv? vim?

...? :D


Yah, a whitelist of commands which includes bash would probably be best. You'd be fine using it and can simply switch to chainsaw mode by adding bash to the command


If it could be activated only by hotkey, that would be helpful.


Yes, maybe similar to the way tab completion works.


I hate to jump straight to Docker, but that seems to be a quick way to restrict access to the local file system. This of course limits utility, but would be much safer. Plus I think the usefulness of a tool like up is primarily in munging the input text anyway.


Docker is an obviously bad solution to this. If you can run Docker, you de facto have root on that computer [1].

Firejail could do exactly the same thing, but without requiring the user running it to download an entire second operating system, or requiring them to have root. Also, the sandboxing mechanisms that Docker uses are generally available and aren't hard to use, so if they went that way they may as well just use the actual syscalls that do what they want instead of importing an entire other operating system to run the commands.

This is where my rant about docker, and the habits it encourages, would go. If I could figure out a way to phrase it politely.

1: https://github.com/moby/moby/issues/9976


Wow, firejail seems super interesting, thanks a lot for the idea and mention! I'm not sure if I'll manage to use it, but certainly a good direction for some further research!

https://firejail.wordpress.com/


Docker containers don't need to be (and often aren't) "entire operating systems." Good point about it requiring root, though.


The problem I was suggesting could be solved with Docker wasn't the privileges of up itself, but the commands you write within up being potentially destructive. I didn't say I thought Docker was a good solution.


You can use unshare to create a read-only view of the file system, without going all the way to containers.[0]

[0]: https://gist.github.com/cocagne/4088467
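
Roughly along these lines (untested sketch; needs root — a rootless variant would use a user namespace, e.g. unshare -Urm):

    sudo unshare --mount --fork sh -c '
      mount --make-rprivate /                 # keep mount changes private to this namespace
      mount --bind /home /home                # turn /home into a bind mount...
      mount -o remount,bind,ro /home          # ...so it can be remounted read-only, here only
      exec sudo -u nobody sh                  # experiment away; writes under /home now fail
    '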


This should be the default mode, and you could activate it with a --rw flag


Thanks a lot for the link! I'll totally try to look into this. If it really proves to be as easy as a single syscall... Oh, wow, now that'd be a killer...


Think a whitelist would be much more appropriate for avoiding potentially harmful commands.


How about not evaluating input until user hits a key, e.g. tab? Then any blame for rm lies squarely with the user.


That's how bash works by default. This tool would be of no use if that's how it worked.


Bash does the autocompletion, but not evaluation and display of the output, does it not?


He was referring to “hits a key” —> [enter]


A potential solution is to create a user who has no or limited write access, and modify up so that it always switches to that user.

I wonder if there’s a way to do this without requiring the creation of a new system user. Some way to revoke all write access for the current process.


Capabilities do this. It's the same mechanic used by containers to restrict their access. See man capabilities(7)


Seems to be Linux-only at first blush.


I'm ok with focusing on Linux as the prime target for up. Though I'm totally trying to think about cross-platform approaches too, obviously.


FreeBSD has a capabilities system called “Capsicum”.

https://www.freebsd.org/cgi/man.cgi?capsicum(4)

https://wiki.freebsd.org/Capsicum

https://www.cl.cam.ac.uk/research/security/capsicum/freebsd....

Capsicum is convoluted though.

OpenBSD has pledge and unveil, which from what I have seen are very elegant.

https://man.openbsd.org/pledge.2

https://man.openbsd.org/unveil


It does occur to me that if this did system-call redirection and banned the unlink (and maybe rename?) syscalls from working in its executed commands, you could get a fair degree of safety.


Is it possible to do system call redirection??


I don't know of a way to redirect syscalls, but they can be limited.

https://www.kernel.org/doc/Documentation/prctl/seccomp_filte...
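
If you just want to play with the idea from a shell, systemd-run can apply such a filter (assuming a systemd-based system; untested sketch):

    # deny the syscalls that remove or rename files; everything else keeps working
    systemd-run --user --pty \
        -p SystemCallFilter='~unlink unlinkat rename renameat rmdir' \
        bash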


I think most of the useful commands ought to be able to run in a namespace where they can't do much. E.g. if they can't see any files then they can't delete them. Unfortunately they wouldn't then be able to read config files. E.g. I would expect grep to look for a config file to decide what colour to highlight matches in, and so on.

Perhaps one could run them as a fresh user with few permissions, except they could still write to files that are writable by "other".


Is it possible to configure namespaces so that they allow read-only access?


i'd be ok with manually configuring up to tell it to copy certain config files to the sandbox so they are available for the commands to run there.

there should not be too many commands that need configuring.

greetings, eMBee.


A solution is to have the tool run first in a container or sandbox where any changes can be rolled back ("test mode").

Once the command looks promising on the test container, run the command again in a fresh container to confirm, and maybe see a list of affected files. Finally run on the main filesystem outside of any test container.

Edit: similar ideas with filesystem snapshots were already suggested elsewhere in the thread.


Maybe a whitelist of commands it can run?


A whitelist would make it far less useful. A blacklist would still make it possible to execute dangerous commands.

The only solution that I would find acceptable would be not to execute incomplete commands as they're typed.


One could sandbox the whole thing with a tmpfs+overlayfs to avoid such accidents.


One possible way to deal with some of those cases would be to intercept those commands and instead display something about what would happen e.g. `deletes 42 files`


I feel like you could replace this tool with a hotkey that runs the current command without clearing the line, sort of ”what if I ran this as is”


Wouldn't it be possible to limit that by cgroups/namespaces? Possibly remounting everything as read-only.


Yes this is an amazingly dangerous and stupid idea. Sure it looks cool but so does a Lamborghini until it catches fire or you crash it into a tree at 150mph.

Giving a user an opportunity to correct mistakes before they kill themselves is a major feature of Unix. Sure, when you've decided you know what you're talking about but don't, it will quite obediently shoot your face off. We don't want to make that bit easier, though.

Edit: please don't take this as derision, but rather as a factual observation. It certainly looks cool and opens the idea up for further discussion.


>Giving a user an opportunity to correct mistakes before they kill themselves is a major feature of Unix.

That's the complete opposite of the Unix philosophy. That's, in fact, the MIT/LISP philosophy.

Unix's motto is: die fast.


Actually it's more: RTFM before you drive the car. A powerful skill many have forgotten, supplanted by autonomous tools that make tripping over claymores easier.


Dangerous, yes. Stupid, though? No. It’s a useful tool and a good idea that merely needs further refinement.


Stupid is too harsh, and I don't want to discourage anyone from contributing to open source, but I'm not sure this is all that useful either (at least not to me). Is the utility in not having to press Enter because this'll do it for me between every character? I can only think that the reason people like it so much is because it looks cool, or because most people don't know how to use command-line editing keybindings effectively and like that they won't have to hit the up arrow and move slowly across the command to edit. The fact that it doesn't have readline support, timestamped history support, or print the last command's output to stdout at the end already seems like a set of feature losses compared to using a shell normally.


At least one neat benefit I can imagine is not flooding your history with 20 minor edits to a regex in a long pipeline, and not flooding your stdout with garbage from a bad regex

Inversion of the output (viewing the top, instead of the bottom, without scrolling on longer outputs) is also of note

Imagine if this were simply the shell, with all its normal editing facilities, but laid out in this fashion. It seems obvious to me there's a usefulness any time you're trying to create a longer pipeline... doing the same thing in the shell normally works, but leaves a mess behind.

The issues you list are all resolvable (and probably trivially so), and thus not really relevant to the question of whether this thing is worth having in the first place (which requires more fundamental questioning). I.e. the issue with rm executing when you're typing 'rmore' is absolutely fatal in the current design and makes it unusable. That can't be fixed without a bunch of hacks, or without losing one of its main features (swapping to enter-to-run is more than fine imo). With such a fatal flaw, it could be fair to call it stupid and useless.

But for lacking readline support on first public listing..?


I couldn't find a good readline library in pure Go that would also interact well with the tcell package I am using for the TUI. This really annoyed me, but eventually I said, screw that, and went with the minimum I had to write to make an input box usable at all :)


I disagree. The example in the GIF is something I've done many times, usually with a bunch of temporary files. Where the commands are side-effect free and relatively cheap, but the specifics are fiddly, this looks to be really useful. But not universally applicable, no!


I think at the point something like this becomes useful, though, I've usually just dropped into writing a Python script - which more often than not is the right choice, because there's a high probability I'm going to need to do more things soon anyway if it's getting that complicated.


Exactly



