No scp, no quoting hell. Obviously there are other ways of doing it. Also, if you run it with the base64 printed out, you can pass it along and the user can still see what will be run.
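For reference, the pattern under discussion looks roughly like this (assuming GNU base64, where -w0 suppresses line wrapping; the encoded blob is plain [A-Za-z0-9+/=], so it passes through the remote shell untouched):

    ssh host "echo $(base64 -w0 script.sh) | base64 -d | bash"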
I got good enough at escaping to really give up on it; the worst cases are just not worth the effort.
ssh has "-n" which I think stops that. It's worked for me most of the time when I end up with that or a similar situation (the once or twice it didn't IIRC involved experiments with chaining ssh commands).
No, "-n" has the opposite effect to the one desired. "-n" is for when you have a server-side command that reads standard input and you want to prevent it from reading from the client side. "-n" cuts off the standard input so the server-side command reads empty input (/dev/null).
The desired effect is for when you want the server-side script to read input from the client environment, for example so you can give answers to questions, or interact with a terminal program. That's broken by piping $script in as input, and it's also broken by "-n".
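The difference is easy to demonstrate (host name illustrative):

    $ echo data | ssh host cat      # remote cat reads the piped data
    data
    $ echo data | ssh -n host cat   # -n redirects stdin from /dev/null; cat sees EOF
    $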
I don't see an obvious situation where that would fail. People pipe tar output, rsync, dd, etc, through ssh without issue. Stdin doesn't seem to have quoting issues, which is what you would expect.
The base64 in the parent post seems unneeded if you're passing through stdin/stdout.
I came here to say that something like this is way easier. I have a script which does exactly this for an interactive SSH across a large number of hosts [0] (with -S, shell mode).
This is the method I ended up using for golem (https://github.com/robsheldon/golem), a tool I wrote for executing server documentation on remotes. Shell quoting was by far the hardest part to get right, and the base64 pipe was the only solution that correctly handled all forms of quoting embedded in the scripts.
I don't believe the base64 is really adding anything here. There aren't quoting or data corruption issues with data coming through stdin/stdout. If there were, all the various scripts that pipe tar, dd, rsync, and so on through ssh pipes would have uncovered that. Just piping the script to bash is enough.
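That is, with nothing to quote at all:

    ssh host bash < script.sh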
The value added is that you’re not using stdin and can use it for something else.
Whether that’s actually useful depends on the use case. Personally I have an ssh wrapper script, somewhat similar (though not using base64), that fixes the quoting so that the argv passed to the wrapper directly corresponds to the argv of the command on the remote end. It’s meant for interactive use, so the program I’m trying to run could easily be something that reads from stdin, or even an interactive program like sudo or vim that expects stdin/stdout to be a tty. (To make those work, the -t option has to be passed to ssh.)
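A minimal sketch of that kind of wrapper (hypothetical script, and it assumes bash on the remote end, since %q can emit bash-specific $'...' quoting):

    #!/usr/bin/env bash
    # usage: sshrun host command [args...]
    host=$1; shift
    cmd=
    for arg in "$@"; do
        printf -v q '%q' "$arg"   # quote each argument for the remote shell
        cmd="$cmd $q"
    done
    exec ssh -t "$host" "$cmd"    # -t gives interactive programs a tty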
The equivalent would be catting the script and piping that to the (remote) bash, but as part of the SSH arguments, not its stdin.
So:
ssh user@remotehost "$( cat script ) | bash"
Note that shell expansion of
$( cat script )
... occurs locally, not on the remote side. This means that any interpretable elements of the output (c|w)ould also be expanded locally, though I'm not quite sure what effects this might have.
That said, I'm not clear on exactly what base64 adds here, though as "script" is read and encoded directly by base64, any local shell expansion is avoided.
There's a #UselessUseOfCat as well. Method could be simplified to:

    ssh user@remotehost "$(< script) | bash"
This isn't solving the same problem as in the article, which is about inline one-liners.
When the commands already exist as a file, the problem has multiple solutions. Quoting via base64 is a clever one, but the result is actually the least convenient compromise of all: not only does the script have to pre-exist as a file, the transport of stdin is unavailable to the process executing the commands.
Other than that, it just makes it kinda obvious that it will not break in any scenario, period. It just feels better, make sense?
As someone wrote here, avoiding the expansion issues is really nice.
It is also very easy to compress with xz, but hey, then you could just use xz to begin with! Well, I don't know how weird binary data will affect stdout in all scenarios, so yet again I stay with safe. I know that you can send tar and whatnot over ssh pipes, but I really don't trust bash that much.
Another nice way would be
ssh $server bash <<'EOF'
echo hello world
EOF
You could not use the stdin then, sadly, but that is solvable.
rsync is smarter: it doesn't make you transfer the file again if there are no changes. Also, if you have keepalive set in your ssh config, running an rsync command followed by ssh to the same server should be instantaneous.
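(What actually makes the second connection instantaneous is OpenSSH connection multiplexing rather than keepalive proper; a sketch of the relevant ~/.ssh/config stanza:)

    Host myserver
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h:%p
        ControlPersist 10m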
At $OLDJOB we used to use nfs extensively, and a common desire was to run some command on the system where the files actually resided, after navigating there (with cd in a local terminal).
To that end, we developed a script called "yonder", which let you write "yonder any unix command with awful quoting needs". If the filesystem was local, it just executed the command; if it was remote, it took care of all quoting needs, then executed a command like "ssh hostname yonder - ENCODED_MESS".
(A variant, autoyonder, could be used in any shell script by ". autoyonder", so that any shell script could transparently run "over yonder")
I think that in the end we used the algorithm of:
- target directory is "argv[-1]"
- concatenate all arguments with a space separating them
- single-quoting everything, who cares if it's needed
- replacing single quotes with the 4-byte sequence '\''
the quoting method we learned from git. Earlier versions took advantage of how we had an 8-bit clean commandline but only needed to send 7-bit strings: we just set the high bit on everything, then stripped it off. The NUL between argv values becomes 0x80, the space within an argument becomes 0xa0, etc., with no special shell interpretation. Gross but it fit the bill.
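In bash, that quoting step looks roughly like this (hypothetical helper, not the original script; the nested quotes inside ${...//...} are bash-specific):

    yonder_quote() {
        local arg out=
        for arg in "$@"; do
            # single-quote each argument, turning every ' into the 4-byte '\''
            out="$out '${arg//"'"/"'\\''"}'"
        done
        printf '%s\n' "${out# }"
    }

    $ yonder_quote echo "it's a test"
    'echo' 'it'\''s a test'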
I don't have access to the scripts anymore, but it worked really nicely for the time.
I wish this existed except for binaries inside Docker. A fat binary for Windows, macOS and Linux that isn't really a binary at all, but fetches the thing you want from Docker and just runs it, with the local file system.
And I wish Microsoft would implement this for the "az vm run-command invoke --command-id RunPowerShellScript ..." command for Windows Virtual Machines.
Here you have JSON quoting along with PowerShell and optionally CMD quoting.
I have a script for this so that I can use modern Linux tools on macOS but without having to go down dependency hell trying to get all the versions correct.
The script uses its name to look for a corresponding dockerfile/container (which it auto-builds if needed) and then mounts the file system inside to allow you to run it on one or more files locally.
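A rough sketch of that kind of wrapper (hypothetical layout: one dockerfile per tool, with the wrapper symlinked under each tool's name):

    #!/usr/bin/env bash
    tool=$(basename "$0")
    image="local/$tool"
    # auto-build the per-tool image if it doesn't exist yet
    docker image inspect "$image" >/dev/null 2>&1 ||
        docker build -t "$image" -f "$HOME/.dockerfiles/$tool.dockerfile" "$HOME/.dockerfiles"
    # mount the working directory so the tool operates on local files
    exec docker run --rm -it -v "$PWD:/work" -w /work "$image" "$tool" "$@"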
By using $OLDJOB are you saying at every old job you ever held this was the case or just the last one? Because if it's just the last one you should have used:
const OLDJOB
Not sure why you used a variable in a comment, maybe because this is Hacker News and you want to resonate with other programmers?
People used to use shell variables like this a lot in (to name one place) alt.sysadmin.recovery, such that the FAQ even mentions it: http://www.faqs.org/faqs/sysadmin-recovery/ (section 1.3)... meaning people have done this for decades.
The key takeaway here is that the SSH client doesn't quote the arguments that constitute the command line when it transforms an argument vector to a string, which might be rather unexpected.
Why the "exec" SSHv2 request only takes a string and not a vector of strings is anybody's guess. Perhaps for Windows compatibility, perhaps because the data representation used for the SSHv2 protocols didn't have a variable-length array syntax (though what would've been wrong with a "string cmdline" where each arg is separated by a NUL?).
Shell's 'echo' and 'eval' have the same issue. They just dumbly concatenate their arguments with a space!
In https://www.oilshell.org, the simple_echo and simple_eval_builtin options make it so that echo and eval accept ONLY one argument. This means you have to use this style:
echo $x $y # illegal
echo "$x $y" # allowed since it's a single argument
write -- $x $y # legal and respects the explicit separator
There's a separate "write" builtin that is more sane because we avoid changing the behavior of existing builtins, even when there is an option set.
This is also illegal when there's word splitting, but Oil doesn't have it (it turns on simple_word_eval [1] and simple_echo_builtin at the same time):
echo $x
This ssh behavior is probably of the same vintage. Since it's an external command, the best we could do is wrap it with "myssh" or something similar, which doesn't have this pitfall.
In general, not for SSH specifically, I tend to close quoted bash strings instead of trying to figure out how many escapes I need. Especially useful for awk and sed.
I find it rather annoying to escape single quotes within single-quoted strings in the shell. Really, there are only two options, both of which are rather verbose:
$ echo 'A'\''A, B'"'"'B'
A'A, B'B
I suppose that there are a few different commands that would work for your example:
$ echo "One Two Three" | awk '{print "'"'"'"$2" here'"'"'"}' # original
$ echo "One Two Three" | awk {print\ \"\'\"\$2\"\ here\'\"}
$ echo "One Two Three" | awk '{print "'\''"$2" here'\''"}'
$ echo "One Two Three" | awk "{print \"'\"\$2\" here'\"}"
To be fair, though, the argument {print "'"$2" here'"} looks pretty weird if you're not used to working with awk.
My advice is not to try, because the blast radius of getting it wrong can be large, the most hazardous form being a quoted sudo bash -c on the other side.
For any one-liner of sufficient complexity such that the quoting matters, I’ll instead invoke the shell remotely and pipe command(s) to it over the transport of stdin.
Special case: when data must be read from stdin, I’ll scp the line to a mktemp’d file and invoke that.
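That special case, in sketch form (file names illustrative; the remote mktemp path is safe to splice in since it contains no shell metacharacters):

    tmp=$(ssh host mktemp)
    scp oneliner.sh "host:$tmp"
    data_source | ssh host "bash $tmp && rm -f $tmp"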
This is a common problem I see a lot at my current job and the only advice I can give you is just say no. If you must use something akin to system() there are tools bash provides like:
printf -v quoted %q "$my_shell_crud"
Or
printf %s "${my_shell_crud@Q}"
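For example, quoting every word with %q makes the joined string safe to hand to the remote shell (host and command illustrative; %q can emit bash-specific $'...' for exotic characters, so assume bash remotely):

    printf -v cmd '%q ' touch "a file with 'quotes'"
    ssh host "$cmd"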
This problem exists in other unfortunate and unlikely places like:
Good idea, but there's a history of user accounts with disabled or restricted shells for various reasons. And execve would bypass that. There's no real pattern/standard for disabled shells, so a fix would be tricky. I suppose you could at least support execve of the login shell plus the command line, but that's not what people are really looking for.
Perhaps it's a case of using the wrong tool for the job? A shell is just a simple UI to easily run commands. When you start using it to run commands within Bash, within SSH, within remote Bash, then maybe it's time to use a different tool.
That's what they're saying, though? They want to stop running commands through the shell, and start running them directly, with the exec() family of functions. SSH's protocol currently makes that impossible.
Sometimes, I need to run a command on a remote machine, and SSH is basically the protocol/tool for that. But I have no choice but to involve the shell on the remote. (There might not be a shell locally: I've done SSH from Python with libraries. But you still have to escape for the remote's shell, and that is what makes this annoying.)
My opinion is that this is a protocol bug. At the very least, there should have been an option for "just transmit this list of args to the other side & exec it", at least in addition to "run this through whatever shell there is over there"…
Such a feature would make so many things easier to script.
With all respect to several of the clever methods shown here, sometimes brute simplicity is the better part of creativity.
For any sufficiently complex sequence of commands, I'll create a script and either place that on a universally-accessible mountpoint (e.g., where NFS mounts are used), or scp that to the target host as part of execution. (This is effectively how numerous configuration-management engines operate, as with chef, puppet, and the like.)
1. Quoting is a non-issue.
2. Parameters to the command being a possible exception.
3. The process is reproducible.
4. The command(s) can be updated / iteratively developed, if necessary.
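In sketch form (names illustrative):

    scp deploy_fix.sh host:/tmp/deploy_fix.sh
    ssh host 'bash /tmp/deploy_fix.sh && rm /tmp/deploy_fix.sh'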
Looks like the article stops just short of an example that requires a nasty amount of escape sequences and single quotes.
If I need to do anything more complex than what the author is discussing I just reach for Python’s shlex.quote() function. The output gets rather hard to follow, but the code is much simpler.
I had to set up a script that SSHes into a host, then starts a screen session, then does some grepping/filtering on the remote output. Shlex made it possible. I couldn’t have come up with the arcane quoting without entering the maw of insanity!
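shlex.quote is easy to borrow even from a shell script (assuming python3 on the PATH; names illustrative):

    quoted=$(python3 -c 'import shlex, sys; print(shlex.quote(sys.argv[1]))' "$tricky_arg")
    ssh host "grep -F $quoted /var/log/syslog"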
Other options that are perhaps faster to use on the fly, are bash/zsh's builtin printf and GNU's printf command, with the %q conversion specifier (modulo the newline/trailing space issue).
I run my computing environment on a remote box, to which I `mosh` and on which I run `tmux`, from a local macOS box. Anything command-line (including Emacs) runs remotely, and anything graphical (including the web browser) runs locally.
Sometimes I need to copy text from the remote Emacs session to the local macOS clipboard in a cleaner way than using macOS select-and-copy. I ran into the quoting issue because ' is needed before and after the text being sent to quote it for the shell, but then that character can't be sent in said text, breaking the entire pipe. I was stumped for hours before finding a solution by trial and error. I think the comment says it all:
; single quotes necessary to escape
; argument for shell
" -c '"
(replace-regexp-in-string
"'"
;;; This can't be the simplest way to do this
"'\\''\\'\\'''\\''"
(I see from isodude's comment that the above indeed isn't the simplest way to do this, and should try his solution sometime.)
The complete function follows for the curious (and for those who can suggest improvements):
;;; BOS
;;; my-clipboard-kill-ring-save - Send any text to remote clipboard
(defun my-clipboard-kill-ring-save (beg end)
  "Copy region to kill ring with `kill-ring-save', and send to
remote clipboard."
  (interactive "r")
  (kill-ring-save beg end)
  (condition-case nil
      (let ((my-clip (current-kill 0 t)))
        (unless (string-equal "" my-clip)
          (shell-command-to-string ; send it to remote clipboard
           ; Calls my shell script that takes argument and, on local machine, either
           ; a) opens it as URL, or b) with the -c option, sends it to local clipboard
           (concat browse-url-generic-program
                   ; single quotes necessary to escape
                   ; argument for shell
                   " -c '"
                   (replace-regexp-in-string
                    "'"
                    ;;; This can't be the simplest way to do this
                    "'\\''\\'\\'''\\''"
                    my-clip t t)
                   "'")) ; closing of argument
          (message "Clip \"%s\" sent to remote clipboard." my-clip)))
    (error (message "No clip sent to remote clipboard"))))
(global-set-key (kbd "M-W") 'my-clipboard-kill-ring-save)
;;; EOS
I have found that writing a function and then using `typeset -f` is a reliable way to pass code over ssh. It is also nice because the function call provides a clear boundary indicating which variables come from the source/configuring system and which come from the remote/executing system.
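A sketch of the pattern (hypothetical function name; typeset -f prints the function's source, which the remote shell evaluates before calling it):

    do_work() { echo "running on $(hostname) with arg: $1"; }
    ssh host "$(typeset -f do_work); do_work foo"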
I had similar issues with rsync, ended up using some atrocious form of escape quotation with SSH to even get rsync to do its job. Don't get me started on the subject of why Synology still distributes such an old version of rsync on their appliances.
cmd() { for var in "$@"; do printf "'%s' " "$(printf %s "$var" | sed "s/'/'\\\\''/g")"; done | sed 's/ $//'; }
myeval() { eval "$(cmd "$@")"; }
`cmd` is by far the more useful of the two. It returns the arguments it was given, properly escaped. You can use it to capture the arguments as a string and then pass them to ssh.
# The following captures the arguments to the running script as a string.
CMD="$(cmd "$@")"
# Another example, could be putting some command into a string.
CMD="$(cmd echo "This will run on the server")"
# Then you run ssh as follows.
ssh remotehost "$CMD"
# Or something like this.
ssh remotehost "$(cmd echo "This will run on the server")"
The real power is in nesting!
ssh remotehost "$(cmd echo "This will run on the server" "$(cmd "Now I don't need" "to worry" "$(cmd About "escaping escapes")")")"
That will generate these escapes.
'ssh' 'remotehost' ''\''echo'\'' '\''This will run on the server'\'' '\'''\''\'\'''\''Now I don'\''\'\'''\''\'\''\'\'''\'''\''\'\'''\''t need'\''\'\'''\'' '\''\'\'''\''to worry'\''\'\'''\'' '\''\'\'''\'''\''\'\'''\''\'\''\'\'''\'''\''\'\'''\''About'\''\'\'''\''\'\''\'\'''\'''\''\'\'''\'' '\''\'\'''\''\'\''\'\'''\'''\''\'\'''\''escaping escapes'\''\'\'''\''\'\''\'\'''\'''\''\'\'''\'''\''\'\'''\'''\'''
Why is this downvoted? What's the problem with this solution? It looks like the same kind of thing as Python's shlex.quote, which should work and about which the manual[1] says:
"The returned value is a string that can safely be used as one token in a shell command line, for cases where you cannot use a list."
openssh for the longest time (maybe even now) also did not respect "--" to mean "end of command line options", which also complicates any attempts to sanitize input for it.
The system call `exec` [1] (that is used to execute any program, often in combination with `fork`) takes the command and its arguments as separate parameters. The space is the way humans (at times through scripts) communicate with their shell (which does the fork-exec dance for them) to let it know what exactly to put into these arguments. But when programs invoke `exec` themselves, they just pass the command and arguments as distinct strings, with no separator involved.
Maybe it's just nitpicking, but since the article attempts at going into the details of what's going on it might be important.
No. What it says, is correct. All of the arguments passed to SSH are joined together using a single space. This single string is then interpreted by the remote side through 'sh -c "${the entire string}"'.
> I saw the `sleep 100` process as direct child of `sshd`.
Keep in mind that shells like Bash do an implicit 'exec' in case the end of the script is detected, no traps are set, etc. etc. etc.
> How do you determine this?
$ ssh floeper echo foo '&&' echo bar
foo
bar
Notice how this should have printed "foo && echo bar" if this was passed to execve() directly.
> Also, in the man page for ssh it says:
> > If a command is specified, it is executed on the remote host instead of a login shell.
> which sounds to me like a shell is not involved in this case.
The emphasis on that sentence from the man page should be on login shell. SSH always spawns a shell, regardless of whether a command is provided or not. It's just that it's not a login shell if a command is given. Your shellrc file won't be run.
> Oh hi, my dubious life choices have been such that this is my specialist subject!
Interesting humblebrag. Other than needing to learn the command line, what is the author referring to here? Learning the order of operations in a commonly used command-line tool isn't something to be so proud of.