$ curl -s https://raw.githubusercontent.com/hannob/bashcheck/master/bashcheck | bash
Not vulnerable to CVE-2014-6271 (original shellshock)
Not vulnerable to CVE-2014-7169 (taviso bug)
bash: line 18: 14885 Segmentation fault: 11 bash -c "true $(printf '<<EOF %.0s' {1..79})" 2> /dev/null
Vulnerable to CVE-2014-7186 (redir_stack bug)
Test for CVE-2014-7187 not reliable without address sanitizer
Variable function parser inactive, likely safe from unknown parser bugs
Why does piping commands in to bash from the internet take on a magical security significance for people who run software they haven't audited literally all the time?
You don't see the difference between running an exploit to check if you're vulnerable to a security bug, and running software provided by your upstream distribution?
No, I don't see why people who run open source code from a variety of sources think that there's something magically insecure about downloading it and executing it using curl and bash.
This is a good question to think about, especially if it makes you think more carefully about what software you run!
But there's a lot of spectrum between "I have audited this software" and "I'm just piping this random URL off the web to my shell". Many software distribution mechanisms provide varying degrees of curation, reputation, accountability, etc.
Also, in this case the link was provided in a context that selects for people who are concerned about security updates (i.e. looking after high-value systems) but still somewhat naive about security practices, and it could get run on systems that have strict rules about installing software on them.
Perhaps the parent commenter doesn't run software without audit. There's no reason to presume defeat -- it's wholly possible to run a modern system using only signed builds from trusted sources, compiling from source for everything else.
Certainly, one can't audit every line, but just as certainly, one can establish a chain of trusted sources and audit the rest (like this tiny snippet of bash, which is so utterly trivial to verify you might as well polish the habit on it).
I'm in the same boat. Though I see the argument that I'm running code I haven't audited all the time, there's something about a bash script in particular that gives me worry. I don't know how rational this is, but I feel like the author could switch the code momentarily to something exploitive, wait for N downloads, then switch it back. I'm not sure how you could gather evidence that this even happened, since the nature of `curl -s URL | bash` is so transient.
You're better off using any text editor - cat will pass ANSI terminal codes through unescaped in many situations, so code can be hidden from you by something like # ^[[<wipe to beginning of line> appended to a malicious bit.
Well, since you went for a technical argument... if you really want to get technical, cat doesn't display anything; it just sends the raw bytes to a stream.
It doesn't 'display' all bytes, because not all bytes are displayable. hexdump would do a better job there.
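To make the point concrete, here's a small illustration (the file name and payload are made up for this demo): an "erase line" escape sequence plus a carriage return can make a terminal render a line that looks nothing like what bash would actually execute.

```shell
# Write a line where an ANSI erase-line sequence (\033[2K) plus \r visually
# replaces the first half when a terminal renders it. bash still runs the
# hidden "echo run-by-bash" part; only the comment is shown by cat.
printf 'echo run-by-bash # \033[2K\r# harmless-looking comment\n' > demo.sh

cat demo.sh      # on a terminal, may show only "# harmless-looking comment"
cat -v demo.sh   # -v prints the escape visibly as ^[[2K, exposing the trick
```

`less` (by default) and most editors similarly show such escapes literally rather than interpreting them, which is the point being made above.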
When you're looking at the page on Github, type "y" and it'll change the URL to the appropriate commit so you have a reasonable expectation that what you reviewed is what you're running:
We could at least reference a known-good version by hash, which is at least a little more secure. But considering that GitHub could have been compromised by this bug, I would probably not pipe much from them until I know more about whether they were affected. Isn't running a jailed git server over ssh one of the big exploit vectors for this bug?
Still potentially vulnerable to truncation, though I don't know that that could actually cause a problem with this payload. And of course, habits like this could quickly propagate infection if a trusted source is compromised (though there are certainly plenty of other habits that share that characteristic).
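For what it's worth, a habit that addresses both the truncation risk and the review-then-run gap is to download to a file first, record a hash at review time, and only execute after the hash checks out. A rough sketch (the filenames and the hash-recording step are illustrative, not a prescribed workflow):

```shell
# Download fully to disk instead of piping straight to a shell; a truncated
# transfer then just yields a file whose hash won't match.
curl -s https://raw.githubusercontent.com/hannob/bashcheck/master/bashcheck -o bashcheck.sh

# At review time, record the hash of the version you actually read:
sha256sum bashcheck.sh > bashcheck.sh.sha256

# Later, verify before running; sha256sum -c fails on any mismatch,
# whether from truncation or from the upstream file being swapped out.
sha256sum -c bashcheck.sh.sha256 && bash bashcheck.sh
```

This also blunts the "switch the code momentarily" concern raised above, since a silently changed upstream no longer matches the hash you recorded.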
"Variable function parser inactive" -- interesting, did they apply the unofficial patch to namespace-prefix function definitions? This poster[0] seems to not have it, though. Who is right? The test at that github link seems a bit sketchy, by using a simple name like "a" instead of, say, __test_bashbug_a, and not checking the output very thoroughly. But it seems like it would fail the other way if there's a command named "a" in OS X's PATH...
It isn't unofficial anymore. For bash 3.2, which seems to be what Apple is distributing, that patch is ftp://ftp.gnu.org/pub/gnu/bash/bash-3.2-patches/bash32-054
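If I understand the version string correctly, the trailing number in BASH_VERSION is the patch level, so a 3.2 build with that patch applied should report 3.2.54 or later:

```shell
# The third component of BASH_VERSION is the patch level; patch 54 of the
# 3.2 series corresponds to the bash32-054 patch referenced above.
bash -c 'echo "$BASH_VERSION"'
```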
~$ echo $BASH_VERSION
4.3.25(1)-release
~$ curl -s https://raw.githubusercontent.com/hannob/bashcheck/master/bashcheck | bash
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
Not vulnerable to CVE-2014-6271 (original shellshock)
Vulnerable to CVE-2014-7169 (taviso bug)
bash: line 18: 1339 Segmentation fault: 11 bash -c "true $(printf '<<EOF %.0s' {1..79})" 2> /dev/null
Vulnerable to CVE-2014-7186 (redir_stack bug)
Test for CVE-2014-7187 not reliable without address sanitizer
Variable function parser still active, likely vulnerable to yet unknown parser bugs like CVE-2014-6277 (lcamtuf bug)
And fwiw, even before installing the new bash via Homebrew I had also followed these instructions:
Also reproduced on 10.9.5 -- you would think that there's someone at Apple watching this forum and hannob's bashcheck, and that the update would get tested. I guess there will be a 1.1 version of the update soon?