Hacker News
Fun with Shellshock (regehr.org)
144 points by luu on Oct 12, 2014 | hide | past | favorite | 21 comments


tl;dr: This software immediately recognized Shellshock for what it was, modified live processes to protect itself, then wrote a patch and re-compiled bash in a few minutes... all with a single malicious request.

This sounds like science fiction. I love it.
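(For reference, the widely circulated check for the underlying bash bug, CVE-2014-6271, is a one-liner; nothing here is specific to the article's setup.)

```shell
# Classic Shellshock (CVE-2014-6271) check: an unpatched bash executes the
# command smuggled in after the function definition in the exported
# environment variable.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
# A patched bash prints only "this is a test"; a vulnerable one prints
# "vulnerable" first.
```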


I have to be honest; at first I genuinely thought it was science fiction. It sounded like a William Gibson novel.


Note that it disabled the exploit instead of fixing it, which is still a remarkable feat. I wonder if that could itself be exploited: e.g. by exposing a light exploit, you trick the software into disabling a security feature that was stopping a more dangerous exploit, a la The Prefect by Alastair Reynolds.


While this is cool, if it can block the attacks, I think I'd rather leave it at that than have it guess at a possible fix and automatically recompile applications. That's definitely a cool experiment, but in real life I'd be way too nervous to allow software to make those calls on its own. If there was something it could subscribe to and receive human-vetted patches for a certain attack signature and then automatically recompile, that'd be pretty cool and feel a lot safer.


Stop being afraid of the future; this is experimental technology, and one day it will be the present, and normal. A vulnerability is not fixed until it's patched. If you tried to access /etc/foo, A3 will recognize that you don't have access and block it, but if you have access to /usr/bin then it won't block that; an attacker can then craft an exploit that only uses what they know you have access to, say netcat or wget in /usr/bin to get a shell, and carefully try to bypass your other security layers.


While I agree that the technology is impressive, I personally don't want to wade through a codebase full of machine-generated patches. Making sense of code when the original author is not available is bad enough; I don't want to know how bad it gets when there is no author at all.


I've got an old-ish SVN repo at work with a ~115k LOC AngularJS app, which is the end result of some very poorly thought-through decisions to "allow all merges". It "works", but it's now completely unmaintainable (and has since been rewritten from scratch, which was the lowest-effort way to add new features). It's a real shame, because the original dev (who wasn't involved in the botched merge) was actually doing some really good work, but the business kinda let him down on Angular training for other staff, pulled him off the project, then let it crash and burn, while not explicitly blaming him but making it clear they didn't blame anyone else... Not our proudest moment...


This was the thing that immediately leapt out at me as well, though I imagine you wouldn't just leave The Machine to manage things forever more, you'd use it as a first responder and then analyse and patch the vulnerability properly.

The idea of a self-patching machine going on forever and building up cruft would make a good premise for a story though :)


The intention isn't that you would carry the patch forward long-term - it's a short-term, throwaway fix that is only intended to fix the problem whilst not breaking your application in your specific use case.

You'd still apply the proper patch from upstream when it's available and throw away the machine-generated one.


It somewhat reminds me of the things that genetic/evolutionary algorithms can produce; amazingly efficient and effective, yet nearly impossible to understand and reproduce systematically. I suppose the same might apply to "self-healing" code in the future.


Assembly programmers who moved to C once said the same thing: "oh, I don't want to wade through a codebase full of machine-generated optimizations; I just want the compiler to keep it simple, and then we can optimize." Now the average compiler will generate far more efficient code on a large project than a human can. Have you considered the possibility that in the future you won't have to wade through those codebases? The patch might live at a lower level where you never have to see it. Be imaginative.


It's kind of hard to believe we still have people like this here. People who're afraid of the future should be banned.


So impressive, although its source-code patches can likely disable needed functionality, and in the process turn a 0day into a denial-of-service attack.


DoS is preferable to RCE though, in my opinion...


It does try to avoid breaking needed functionality by finding and disabling something unique to the bad request:

> testing system call block policies for any unique calls or call parameters found

Obviously it can only work with the data it has, so if your system has some rarely exercised code paths it might break those, but at least it won't break the 99% use cases?
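A minimal sketch of that "unique calls" idea (the syscall lists here are fabricated; a real system would collect them by tracing a benign run and the malicious one):

```shell
# Toy illustration: diff the syscall set from a benign run against the set
# from the malicious request; calls unique to the attack are candidates for
# a block policy. The lists are made up for the example.
printf 'read\nwrite\nopenat\n'         | sort > /tmp/benign_syscalls.txt
printf 'read\nwrite\nopenat\nexecve\n' | sort > /tmp/attack_syscalls.txt
# comm -13 prints lines found only in the second file, i.e. attack-only calls
comm -13 /tmp/benign_syscalls.txt /tmp/attack_syscalls.txt   # → execve
```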


I assume that's the point of adding bash regression tests into the mix, if the tests cover "needed functionality".


This is quaint, but it looks too much like "look how I solved this" (P.S. give me funding) multiple weeks after the fact, and it clearly comes from a bureaucratic environment. The overall approach leaves the critical question of service-specific security policy development out of scope.

We've had NSA SELinux since 2000; the reason it's rarely used is that regular people don't have time to grok that level of detail. In response, we have things like grsec's learning mode.

As with any time app-specific ACLs are brought up, the real question here is process, not technology. I would argue that what is really needed is not a kernel-specific solution or a protocol-specific solution (notice how many of the new SOA infrastructure half-solutions are HTTP-only, or port-level only, regarding network policy?) but rather a generic, industry-wide approach to multi-faceted security policy generation for arbitrary services, covering all aspects of service behavior at both host and network levels. That in turn requires a devops process that is neutral with respect to virtualization paradigm, networking paradigm, and OS. This is something we are moving towards slowly (e.g. widespread git use, standard-ish build tools, containers, VLANs), but it is not widely discussed.

We have the tools in major kernels already: syscall monitoring, relatively mature multi-subsystem security policies, network ACLs and monitoring systems, increasingly sophisticated network virtualization solutions like Open vSwitch, filesystem-neutral monitoring. The pain in the ass is putting it all together in an average-app-grokkable manner that doesn't demand comprehension of low-level OS internals from regular developers, nitpicking comprehension of application behaviour from operations infrastructure, or the employment of security nerds to reap the benefits of some of the available lower-level technologies.

I fear commercial offerings will never take us to this position: it's simply not an easy sell (intangible, long-term benefits vs. the shorter overall timeframe and lower cognitive overhead / requisite management grok of current, familiar development processes) and too complex to implement, in most cases requiring a complete devops process change. Instead, I predict that we will slowly see open source devops tools layer on top of RCS/VCS to provide a common continuous integration / deployment process that integrates effective multi-subsystem application profiling for security policy development, in parallel with regression tests and other pre-deployment processes.

Further personal thoughts on this area @ http://stani.sh/walter/pfcts


I am one of the developers, so I can offer some small clarifications. While it was not clear from the writeup (or from when it got posted), this experiment was run two days after the bug disclosure and well before patches started to stabilize. The post had to go through a review before we could publish it.

The A3 stack relies on syscall monitoring (via virtual machine introspection), network filters (which are protocol specific) and filesystem-neutral monitoring. Some of this stuff is readily comprehensible to a sysadmin, other stuff is not. Automatic application profiling is an area of ongoing work for the project.
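To make "protocol-specific network filter" concrete, here is a hypothetical config fragment in that spirit, not A3's actual output: a crude firewall rule dropping inbound HTTP packets that carry the Shellshock function-definition marker.

```shell
# Hypothetical illustration only (requires root and the xt_string match):
# drop inbound HTTP packets containing the Shellshock signature "() {".
# Crude, since it inspects raw packets rather than parsed requests, so it
# can miss payloads split across packets.
iptables -A INPUT -p tcp --dport 80 -m string --string '() {' --algo bm -j DROP
```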


This is very interesting work. Thanks for publishing it. One thought experiment for you (which perhaps you've discussed already): could an attacker potentially influence and predict the state of patched software on the target system, introducing vulnerabilities which did not exist prior to patching? Also along that line, have you attempted to fuzz the input fields in scenarios such as your shellshock example?


That thought experiment falls under the umbrella of adversarial machine learning, which is something that we are aware of but has not been a focus for us thus far. Getting the correct adaptation in the first place was the primary goal. To trigger adaptation/patching, an attacker needs to drive the protected application to an undesirable state (exploit it, in other words), so an insidious attack that predicted and triggered multiple patches in the name of creating some ultimate vulnerability is a pretty high bar to clear. I would not claim it is impossible, but I do not know under what conditions that path would ultimately be easiest for the attacker.

We have done some work with fuzzing malicious inputs to produce better network filters, but that work focused on integrating a 3rd party fuzzer: https://dist-systems.bbn.com/papers/2013/Automated%20Self-Ad...


Interesting how many negative votes this is getting with zero substantive response.



