"Sysdig Falco is a behavioral activity monitor designed to detect anomalous activity in your applications. Powered by sysdig’s system call capture infrastructure, falco lets you continuously monitor and detect container, application, host, and network activity... all in one place, from one source of data, with one set of rules."
"The grey fox is a kernel extension for Mac OS X that logs performed system calls by any process running on the system. Research for my master thesis required a dataset of all system calls performed by benign as well as malicious processes (malware). After analysis of the gathered datasets, several system call patterns that identified malware were extracted. "
(disclaimer: I was one of the three professors on the msc thesis committee for this work)
Related thoughts specifically concerning ransomware (controlling write access):
Operating systems with a GUI typically have file associations bound to executables. One could use that knowledge to restrict write access (or even read access), or at least flag the anomaly: "an application with no file associations is doing file I/O." Known system executables and the like would be excluded, of course.
So if applied more strictly, an application that wasn't associated with .JPG or .JPEG couldn't write to those files, or perhaps not even read from them.
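As a toy illustration of that policy, here's a Python sketch; the association table, the paths, and the enforcement hook are all invented, since real enforcement would have to live in the kernel or a file-system filter:

    import os

    # Hypothetical table: executable -> extensions it is associated with.
    ASSOCIATIONS = {
        "/Applications/Preview.app": {".jpg", ".jpeg", ".png", ".pdf"},
        "/Applications/TextEdit.app": {".txt", ".rtf"},
    }
    SYSTEM_EXECUTABLES = {"/usr/libexec/backupd"}  # known-good exclusions

    def write_allowed(executable, target_path):
        """Deny writes to file types the executable isn't associated with."""
        if executable in SYSTEM_EXECUTABLES:
            return True
        ext = os.path.splitext(target_path)[1].lower()
        return ext in ASSOCIATIONS.get(executable, set())

    # A ransomware binary with no associations is denied; Preview is not:
    assert not write_allowed("/tmp/cryptolocker", "/Users/me/photo.jpg")
    assert write_allowed("/Applications/Preview.app", "/Users/me/photo.jpg")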
As for restricting read access, my first thought is browsers, which may not be associated with every file type you want to upload/attach; but that could be a dialog between you and the system as it happens (allow it once/now/this session, for N days, forever, etc.).
That's tough to do if the application in question is a script engine or is merely script-enabled like Excel or Word.
One ad-hoc thing you could do to beat some of the ransomware out there is remove or break things like vssadmin.exe, so the ransomware can't prevent you from reverting to a previous version of a file. I don't know if macOS has something analogous to this, though.
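For the detection variant of that idea, a rough Python sketch (using the third-party psutil package; the string heuristic is just my guess at the usual invocation, something like "vssadmin delete shadows /all /quiet"):

    import time
    import psutil  # third-party: pip install psutil

    def shadow_deletion_attempts():
        """Yield processes whose command line deletes shadow copies."""
        for proc in psutil.process_iter(["pid", "cmdline"]):
            cmdline = " ".join(proc.info["cmdline"] or []).lower()
            if "vssadmin" in cmdline and "delete" in cmdline and "shadows" in cmdline:
                yield proc

    while True:
        for proc in shadow_deletion_attempts():
            print("ALERT: pid %d is deleting shadow copies" % proc.info["pid"])
        time.sleep(1)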
Here's my tl;dr. First I'm quoting the conclusion:
> Is it possible to detect malicious behaviour performed by malware, based on monitoring system calls?
> This work shows the answer to that research question is: Yes.
The paper is really good. Vincent Van Mieghem from Fox-IT (NCC Group) loaded a kernel module (bypassing ASLR and other protections) to log all the system calls made by a program when it is executed for the first time. I'm not sure whether he does that in a sandbox/VM; it looks like he might just be trying to detect a virus while it's contaminating the box.
The point is that most malware uses the same kinds of system calls. If it wants persistence, it will use LaunchDaemons or LaunchAgents (which in turn use launchctl); it will often use root privileges, spawn sh or bash to run tools, and often also package and run LogKext (THE open-source keylogger for OS X).
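To make that concrete, here's a rough Python scan of those persistence locations using the stdlib plistlib; the directories are the standard macOS ones, but the "launches a shell" heuristic is my own simplification:

    import plistlib
    from pathlib import Path

    PERSISTENCE_DIRS = [
        Path.home() / "Library/LaunchAgents",
        Path("/Library/LaunchAgents"),
        Path("/Library/LaunchDaemons"),
    ]
    SHELLS = {"/bin/sh", "/bin/bash", "/bin/zsh"}

    for directory in PERSISTENCE_DIRS:
        for plist_path in directory.glob("*.plist"):
            try:
                with open(plist_path, "rb") as fp:
                    job = plistlib.load(fp)
            except Exception:
                continue  # unreadable or malformed plist
            args = job.get("ProgramArguments", [])
            if args and args[0] in SHELLS:
                print("suspicious: %s runs %s" % (plist_path, " ".join(args)))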
He designs many patterns to detect every piece of malware he could find on OS X (rootkits, backdoors, worms, ransomware, ...) and ends up detecting all of them with one of the patterns, with an extremely low false-positive rate (especially if you're not a power user).
"The most successful defined pattern is constructed around the executions of Unix shell processes performed by malware."
Syscall profiling will produce false positives. For example, some installer invokes the shell to perform tasks it could have done via syscalls directly, or indirectly through some higher-level library. His benign applications included only a single third-party program (Office 2011). Moreover, it seems he only collected data from running the applications, not while installing them. If he had collected data while installing a lot of perfectly legitimate software, he would have seen a lot of shell invocations.
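To see why, here's a completely benign snippet that would trip the shell pattern above: subprocess with shell=True ends up calling execve("/bin/sh"), exactly like the malware does (the install step itself is made up):

    import subprocess

    # shell=True spawns "/bin/sh -c ..." -- the very execve the
    # shell pattern flags, from a legitimate install/setup step.
    subprocess.run("mkdir -p ~/.myapp && cp defaults.ini ~/.myapp/", shell=True)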
Syscall profiling will also fail to detect malicious code using techniques such as syscall proxying.
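For those unfamiliar with the technique, a deliberately minimal sketch of the victim-side stub (insecure toy code; the point is that the logic driving the calls lives on the attacker's machine, so there is no fixed local syscall sequence to profile):

    import json
    import os

    # Only JSON-friendly operations, to keep the sketch tiny.
    ALLOWED = {
        "listdir": os.listdir,                     # [path] -> list of names
        "unlink": os.unlink,                       # [path] -> null
        "filesize": lambda p: os.stat(p).st_size,  # [path] -> int
    }

    def serve(conn):  # conn: an already-connected socket
        """Execute one remote-requested call per line of input."""
        for line in conn.makefile():
            req = json.loads(line)  # e.g. {"call": "listdir", "args": ["/Users"]}
            result = ALLOWED[req["call"]](*req["args"])
            conn.sendall((json.dumps({"result": result}) + "\n").encode())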
I think the syscall hooking angle has already been done to death and has consistently failed to achieve significantly better results than signature-based detection. It's a great technique for an expert to monitor a system with, but useless for the average user, who will not be able to investigate whether the cause of an alert is benign or malicious.
[Edit: There's a bigger list of applications that were tested on page 91, listing which ones create false positives. It's unclear whether or not data was gathered during installation of these applications or just while running them]
Though she also had a couple of papers in a similar vein starting in 1994.
However, Masters theses are often not completely novel, and it's sometimes worth repeating work from long ago to see what's changed as operating systems and malware have evolved.
Detection and mitigation. The authors of the paper linked by the sibling post continued the work, creating a sort of exponential de-nicing system in the early 2000s.
Source: student of one of the co-authors; read the papers.
In the ever-escalating arms race: if this detection method gains in popularity, will malware writers adopt a form of "steganography" in their system calls, sneaking below the radar of these detection schemes?
It depends. Sophisticated malware, say a rootkit, can hide its calls and even its presence on the system. It can do things like modify the syscall table, register a MAC policy to alter what's returned to the rest of the system, and use Mach ports to do things without tripping security systems. I say "hide" because you can still find the malware; it just takes a lot more work. Also, the malware had to do something to get to that point in the first place, and that is easier to detect. A lot of products just deal with that.
At what point, though, do we shift from trying to detect and block/remove malware to trying to prevent it from exploiting its way onto machines in the first place?
I'm sure the security industry has its reasons. It just seems like a great deal more ingenuity goes into the antivirus arms race than into hardening attack surfaces.
Those are different jobs, and both jobs are being done.
The sheer complexity of hardening makes it naive to think it will ever be bulletproof, as I'm sure you'll agree, so there will always be a call for another layer behind it.
A world-facing firewall defends from the outside, strict routing and an internal firewall defend the network from itself, firewalls on each server/computer keep exploits/worms from spreading like wildfire once they manage to find a crack, and detection software does its damnedest to discover when something unwanted is happening.
Remove any of these, and the whole chain is less secure.
To make a computer completely secure, of course, you need a trash compactor and a boat to take it out to the Mariana Trench, so it'll always be about balancing risks against accessibility and usability.
Frankly, I think the detection software part of the security stack just has better PR.
I am obviously not a security expert. But my understanding is that most breaches happen because of vulnerabilities that we've known about (and known how to defend against) for a long time, like
- Not deploying email encryption or even SPF, such that an attacker can convincingly impersonate others in the company by email (spear-phishing).
- Not updating software (which is necessarily exposed to a large audience by the firewall, because a large audience consumes it) when it has known vulnerabilities in it.
- Writing and running code in memory unsafe languages and not even mitigating that risk through static analysis or Valgrind.
- SQL injection and other failures to sanitize user input (a sketch of this one follows after the list).
- Poorly thought out authentication/authorization schemes and bypass bugs, like URL enumeration.
- Services that make no attempt or an inadequate attempt to authenticate their consumers (i.e. the firewall can't protect the MongoDB server from the web server; the whole point of the MongoDB server is to be accessed by the application tier).
- Not using TLS where appropriate.
- Not using 2FA for privileged insiders.
- Weak password reset schemes, and password expiry schemes that result in users writing them down on post-its at their workstations.
- Shared accounts.
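On the SQL-injection bullet, the classic mistake next to the fix, using Python's built-in sqlite3 (table and data invented):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "nobody' OR '1'='1"

    # Vulnerable: user input is spliced into the SQL text itself.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()
    print(rows)  # [('admin',)] -- the injected OR clause matched everything

    # Safe: a parameterized query keeps data out of the SQL text.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)  # []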
It just seems odd to me that the security community will basically skin you alive for gross negligence if you don't have a firewall or antivirus, but this kind of stuff is more or less accepted as a fact of life.
And a firewall or antivirus is not necessarily going to do anything about it (if the attacker goes through routes that have to be open for the system to function, and writes their own exploits for which virus definition signatures don't exist).
Once a rootkit is installed, it can completely bypass system call monitors in all sorts of ways – communicating with a kernel component via a shared user/kernel memory page, or adding a new device and communicating using custom ioctls, or "backdooring" an existing system call when some userland parameter is set to a magic value, or ...
I am not at all confident that one could find such malware without human intervention.
Not if it's hypervisor-based monitoring with I/O mediation. Even that is still a weak defence, though. A stronger model is kernel integrity + syscall restriction + MAC or capability protection for the usage details.
I understand that once a rootkit is installed, all bets are off. I was wondering if the syscalls by which the rootkit gets installed can be obfuscated to look more like a benign/normal process and evade detection by a malware-syscall-pattern recognizer. Or are some malware syscall patterns essentially "unhideable"?
"Sysdig Falco is a behavioral activity monitor designed to detect anomalous activity in your applications. Powered by sysdig’s system call capture infrastructure, falco lets you continuously monitor and detect container, application, host, and network activity... all in one place, from one source of data, with one set of rules."