> You will need to decide whether to attempt to jailbreak the device and obtain a full filesystem dump, or not.
Since Apple won't allow iDevice owners to access an unredacted raw disk image for forensics, iOS malware detection tools are hamstrung. The inability to fully back up a device means that a true post-intrusion restore is impossible: only a fresh OS version can be installed, then a subset of the original data can be restored, and then every app/service needs to re-establish trust with this newly "untrusted" (but more trustworthy than the previously trusted-but-compromised) device.
In theory, Apple could provide their own malware analysis toolset, or provide optional remote attestation to verify OS and baseband integrity.
In the absence of persistent disk artifacts, the next best option is behavioral analysis, e.g. usage anomalies ("dog that did not bark") in CPU, battery, storage or network. Outbound network traffic can be inspected by a router and compared against expected application and system traffic. This requires an outbound firewall where rules can specify traffic by wildcard domain names, which are widely used by CDNs. Apple helpfully provides a list of domains and port numbers for all Apple services.
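A rough sketch of that matching logic (the allowlist entries, ports, and observed pairs below are illustrative assumptions, not Apple's actual published list or any particular router's export format):

```python
# Sketch: flag outbound (domain, port) pairs that don't match an allowlist of
# wildcard domains, in the spirit of comparing router logs against Apple's
# published service hosts. Entries here are examples, not the real list.
from fnmatch import fnmatch

ALLOWLIST = [
    ("*.apple.com", 443),
    ("*.push.apple.com", 5223),   # APNs uses 5223 per Apple's enterprise docs
    ("*.icloud.com", 443),
]

def is_expected(domain: str, port: int) -> bool:
    return any(fnmatch(domain, pattern) and port == allowed_port
               for pattern, allowed_port in ALLOWLIST)

def flag_anomalies(observed):
    """Return the (domain, port) pairs that match no allowlist entry."""
    return [(d, p) for d, p in observed if not is_expected(d, p)]

if __name__ == "__main__":
    print(flag_anomalies([("gateway.push.apple.com", 5223),
                          ("exfil.example.net", 443)]))
```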
> Since Apple won't allow iDevice owners to access an unredacted raw disk image for forensics, iOS malware detection tools are hamstrung.
And it's not just Apple.
Android is just as bad, and arguably worse for the user: iOS backups are consistent in backing up everything sans stuff in the Secure Enclave (i.e. credit card and eSIM keys), whereas on Android backup support is optional for apps, and many games just outright don't do any kind of backup.
This is true and I resent it. However, at least you have the option of installing a ROM that supports toggling adb root out of the box. That alone solves 99% of the issues I have with Android in practice.
> However, at least you have the option of installing a ROM that supports toggling adb root out of the box.
That's not valid for all devices: all Samsungs need a one-week cooldown (Knox lock, presumably to thwart people from rooting stolen devices to bypass anti-theft); all modern Androids require a full wipe of the device as part of rooting, so it's useless for forensics; and a shitload of apps will flat out refuse to work on rooted devices - forget many games, forget anything with streaming, forget banking apps.
It works for forensics if you already had the OS installed. The fact that the process of flashing a new OS wipes the device is a good thing (consider what the alternatives are).
Obviously I feel the user should always have had root. Switching the OS is a fix for that. Not choosing to do that is the same as the choice to purchase a locked device.
> shitload of apps will flat out refuse to work on rooted devices
People say this, but so far most stuff has worked for me on LineageOS with microG. The adb root toggle isn't detectable as far as I know. Their only realistic option is to require SafetyNet.
At that point we've digressed from a conversation about forensics, root access, and switching the OS to one about the evils of widespread remote attestation.
I agree that centralized and ubiquitous remote attestation is evil. So disable it. Don't use services that require it. Don't use anything that requires DRM either, since that's one of the primary driving forces.
> Even with all the various hacks enabled.
Those were never going to work long term. Hardware based remote attestation can't realistically be bypassed by the end user.
> iOS backups are consistent in backing up everything sans stuff in the Secure Enclave
Do they now back up TOTP generators? I lost access to an account I'd had since my teens because, when restoring from backup, I had no MFAs in my Google Authenticator. Since I had ported my teenage cell # into Google Voice, when the backup codes I'd generated for the account failed to restore access, I lost access to my Gmail plus the phone number I'd had for decades, despite taking what seemed to be reasonable steps.
(I'd back up my iPhone to my laptop, and back up my laptop to a USB hard drive, one of which would live in my house and another in a secure offsite location.)
Well, unfortunately, if the backup method is SMS MFA and that SMS number is now behind Google's cloud, you can become locked out. Really terrible from a UX perspective -- I managed to go decades without a data loss, and then poof -- every email, calendar entry, and contact from my late teens to my 30s, erased.
The fact that iPhones are hard to dump is actually the main protection against threats when your phone is stolen or taken away from you (from a more or less legitimate-looking organization or person). It's a pretty good thing overall.
Why must that prevent backup from an Apple Configurator MDM-supervised device that is paired to an admin Macbook, with MDM policy to prevent mobile pairing with any other Macbook? There is a full cryptographic chain to verify the supervising device, which already has full MDM policy control of the mobile device. What security is being added by preventing that authorized supervisor from doing a forensic backup?
> provide optional remote attestation to verify OS and baseband integrity
And lock us out of our computing freedom while they're at it.
Remote attestation enables discrimination against free computers owned by users rather than corporations. They could theoretically allow users to set their own keys but it's not like apps and services are gonna trust people's personal attestation keys, they're only gonna trust Apple's and Google's.
This is among the most dangerous developments in cryptography to date and it's gonna end free computing as we know it today. Before this, cryptography used to empower people like us. Now it's the tool that will destroy our freedom and everything the word "hacker" ever stood for. Malware is a small price to pay to avoid such a fate.
It's not going to be "optional" either. Every major service is going to use it. Guaranteed.
> Remote attestation enables discrimination against free computers owned by users rather than corporations.
Not when my mobile device is attesting to my home server with OSS attestation software, or my USB Armory with OSS firmware for local "remote" attestation. GrapheneOS can attest to a 2nd mobile device running GrapheneOS, or a web verifier. This is not rocket science. Provide a mobile setting for attestation server URL.
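To make the shape of that concrete, here's a toy sketch of a self-hosted verifier: pin the device's public key at enrollment, then check a signed challenge on each attestation. Real Android hardware key attestation involves X.509 attestation certificate chains and verified-boot state, so treat this only as a model of the challenge/response flow (it assumes the Python `cryptography` package):

```python
# Toy model of a self-hosted "remote" attestation verifier: the server pins the
# device's public key once, then verifies a fresh signed nonce on each check-in.
# This stands in for, but is much simpler than, real hardware key attestation.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

class Verifier:
    def __init__(self, pinned_pubkey: Ed25519PublicKey):
        self.pinned = pinned_pubkey          # captured at enrollment/pairing

    def challenge(self) -> bytes:
        return os.urandom(32)                # fresh nonce per attestation

    def verify(self, nonce: bytes, signature: bytes) -> bool:
        try:
            self.pinned.verify(signature, nonce)
            return True
        except InvalidSignature:
            return False

# The "device" side, standing in for a hardware-backed key:
device_key = Ed25519PrivateKey.generate()
verifier = Verifier(device_key.public_key())
nonce = verifier.challenge()
print(verifier.verify(nonce, device_key.sign(nonce)))   # True
```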
> Every major service is going to use it. Guaranteed.
Hence there must be a mandatory option to define your attestation server. Advocating for the right to specify and/or host your arbiter of device trust (including firmware RoT) will do infinitely more for freedom than arguing against cryptography.
> Not when my mobile device is attesting to my home server with OSS attestation software, or my USB Armory with OSS firmware for local "remote" attestation. GrapheneOS can attest to a 2nd mobile device running GrapheneOS, or a web verifier. This is not rocket science. Provide a mobile setting for attestation server URL.
No, dude. Look at Google SafetyNet / Play Integrity. It's used by banking apps, streaming apps, certain games, and much, much more, to lock out devices that don't pass. I believe one of the last Android devices that will ever be able to pass SafetyNet while rooted is the OnePlus 7 Pro. Not that I'm ever going to tweak on Android again until TWRP adds a setting to disable OpenRecoveryScript, since a complete lack of prompting for consent is how I had my last major data loss.
(Apparently it would kill them to add anything like a "script execution in 5 seconds, cancel?" popup.)
Due to hardware remote attestation that cannot be bypassed, there is no longer any point to using Android. We used to own our devices. Not anymore. Might as well get an iPhone and enjoy the better kept garden. I wonder if there's a Termux equivalent for iPhone.
Fully open Debian Linux VMs (and possibly Windows VMs via GrapheneOS) are coming to Android 16 and can run desktop GUI apps in those VMs. This is already shipping in Android 15 on Pixel devices.
ISV/app misuse of remote attestation does not preclude valid use cases under device owner control. Android Virtualization Framework is the first step in reducing the over-broad conflation of device measurements with "security". It can lead to narrower measurements and attestation of specific OS components, while opening up other components to user modification without "breaking" device verification.
> Android Virtualization Framework is the first step in reducing the over-broad conflation of device measurements with "security". It can lead to narrower measurements and attestation of specific OS components, while opening up other components to user modification without "breaking" device verification.
Okay... and then someone releases some new "security" library with an all-or-nothing philosophy that contains every possible check under the sun for any kind of rooting, modification, customization or even unlocking - and then all the banking apps start using this.
You can't win against security theater. You just can't.
> then someone releases some new "security" library with an all-or-nothing philosophy
Don't be demoralized by PTSD :)
AVF/pKVM is not security theater, especially if "apps" are incorrectly using attestation. pKVM provides strong isolation between Android and other VMs, using the CPU's hardware virtualization support (two-stage address translation). The Android "host" VM can be isolated from the Debian Linux VM.
Search for pKVM technical videos. Implementation code was upstreamed to mainline Linux around 2021 and is public.
Banking websites work on desktop Linux browsers, which can be run in the isolated Debian Linux VM.
I said the banking apps are full of security theater. That's why they do root checks and such. AVF/pKVM will not prevent apps from incorrectly using attestation. If there's a way for an app to check for root or any possible deviation from fully trusted and unmodified, then it will be checked by certain types of apps, like banking apps, that rely on security theater. To be clear, checking everything possible and completely locking you out if anything is even slightly off is the security theater, not AVF/pKVM itself.
> checking everything possible and completely locking you out if anything is even slightly off is the security theater
Sadly not the first or last time that technology is wielded imprecisely or carelessly. Improvement options include:
1. Marketing and rewarding non-theatrical attestation.
2. Open training content for attestation best practices.
3. Symmetrical 2-way attestation of open components.
4. Automated CI/CD detection of over-broad attestation.
5. IETF or other advocacy to improve attestation protocols.
6. Legal/regulatory mechanisms.
The previous DRM box (TrustZone) didn't offer positive side effects like a Linux VM where the user can have root and install software without an app store.
This has nothing to do with attestation servers. It's about who the corporations trust. Namely, each other.
Your attestation server doesn't matter. The corporations are not gonna trust any attestation provided by your home server running open source software under your control. They're not gonna trust GrapheneOS's AOSP attestation where you provide your own keys. Simply because your open source software has the power to straight up wipe out their entire business models if left unchecked. They'll deny you service if you use it.
Think about it. You can reverse engineer their apps and network protocols and build better software that doesn't advertise to users, that doesn't collect their information, that automates boring tasks, that copies data they don't want copied, that transmits data they want censored. This stuff directly impacts their bottom line and they absolutely want cryptographic proof that you are not doing anything of the sort.
They're not gonna trust your keys. They're gonna trust Google's and Apple's. Because their interests are aligned with Google's and Apple's, and not with yours.
They've set things up so that they own the computers. They're just generously letting us use them, so long as we follow their rules and policies. If we hack the computer to take control of what should be ours to begin with, they call it "tampering". And now they have hardware cryptographic evidence of this "tampering". This allows them to discriminate against us, exclude us. Since it's hardware cryptography, it's exceedingly difficult to fake or bypass.
This is the future. Either you use a corporate pwned computer, or you're ostracized from digital society. Can't log into bank accounts. Can't exchange messages over popular services. Can't even play stupid video games. Can't do much of anything unless somehow hackers create a parallel society where none of this attestation business exists.
What good is free software if you can't use it? It's worthless.
> This has nothing to do with attestation servers. It's about who the corporations trust. Namely, each other.
I'm glad the conversation has moved from attestation to trust :)
If you look at inter-corporation contracts, it's clear that corporations don't trust each other. We're in a neolithic era of attestation, used primitively and with wide collateral damage. More granular options exist; look at the architecture of QubesOS for one example. Android Virtualization Framework should enable more examples.
Remember when SSL certs were monopolized by a small number of players? Then the push for HTTPS usage led us to Let's Encrypt.
There's no technical reason that a similar organization could not exist to improve tooling and coordination for decentralized and meaningful attestation of specific components (note NOT devices) and the security architecture by which components are composed into devices.
All is not lost, these are only early contests of competing visions.
The fact there are no technical reasons preventing things from being good is irrelevant: there are countless business and political reasons, and those are the ones that matter.
It doesn't matter that better technology could theoretically exist. It matters that remote attestation almost perfectly serves the interests of corporations and governments.
The better, more granular technology doesn't matter. The banks won't use them, they'll say it enables fraud and money laundering. WhatsApp won't use them, they'll say it enables spam and scams and abuse. Streaming apps won't use them, they'll say it enables copyright infringement. And so on, and so forth. The only technology they'll use is the one where they maintain control over the machine.
They will not tolerate the machine being yours. Because if you own the computer, you can make it spam people and copy movies if you want to. They gotta own the machines. If they can't, they'll take their balls and go home.
Are banks blocking desktop web browsers? You can access bank websites using a desktop web browser in the Debian Linux VM that is running in parallel to the Android VM. No app store, attestation or DRM needs to be involved.
Absolutely. My bank does not allow many operations via web browser anymore. It directs me to use the mobile apps. "Fraud prevention". All banks in my country are like that.
They only allow internet banking on a personal computer if you install their "security module". It's a kernel module that makes the computer incredibly slow. Once upon a time I tried to reverse engineer that thing to figure out why and I caught it intercepting every single network connection. That told me all I needed to know.
They want to own our computers. They think it's justified. As if "fraud" excuses everything. There is no limit they wouldn't cross. It's about control. They want to have all the control while we have zero.
In theory, pKVM could encapsulate a web browser with spyware kernel module into a dedicated VM that cannot see other traffic. The bank could "own" the banking client VM, while the device owner could run other VMs of their choice.
This merely isolates the problem. It still means we don't fully own our machines.
These virtual machines you speak of would be running on our machines but configured so that we actually have zero access to them. Do we really own the machines if we can't see the code they're running? If we can't view or edit the memory?
Those virtual machines are little foreign embassies on our machines that lets them claim sovereignty over our computing resources. It's our land but their territory and laws. Our computers, processors and memory but their code and data. They carve out little niches out of our own hardware that even we cannot access.
Stuff like this cannot happen without them usurping some amount of power from us. And they will probably usurp far more than they need to. Because they can.
Would DNS logs suffice? You could use a service that offers DNS query logs, like NextDNS or a Pi-hole, to watch DNS traffic from the device, but you wouldn't know which app sent it or for what purpose.
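A quick sketch of what that watching could look like against a Pi-hole's dnsmasq-style query log; the log path, line format, phone IP, and allowlist here are assumptions for illustration:

```python
# Sketch: scan a Pi-hole (dnsmasq-style) query log for lookups from the phone's
# IP that don't match an expected-domain allowlist. Path, format, and entries
# are placeholders; adjust for your own setup.
import re
from fnmatch import fnmatch

PHONE_IP = "192.168.1.23"                       # assumed DHCP-reserved phone IP
EXPECTED = ["*.apple.com", "*.icloud.com", "*.apple-dns.net"]
QUERY_RE = re.compile(r"query\[\w+\]\s+(\S+)\s+from\s+(\S+)")

def unexpected_queries(log_path="/var/log/pihole.log"):
    hits = set()
    with open(log_path) as log:
        for line in log:
            match = QUERY_RE.search(line)
            if not match:
                continue
            domain, client = match.groups()
            if client == PHONE_IP and not any(fnmatch(domain, p) for p in EXPECTED):
                hits.add(domain)
    return sorted(hits)

if __name__ == "__main__":
    for domain in unexpected_queries():
        print("unexpected lookup:", domain)
```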
Has anyone seen an iOS device fail to boot due to an integrity violation?
Whatever it's verifying is insufficient to stop persistent iOS malware, hence the existence of the MVT toolkit, which itself can only identify a small subset of real-world attacks. For evidence, look no further than the endless stream of zero-day CVEs in Apple Security Updates for iOS. Recovery from iOS malware often requires DFU (Device Firmware Update) mode reinstallation from a separate device running macOS.
Non-persistent iOS malware can be flushed by a device hot-key reboot, which prevents malware from simulating the appearance of a reboot.
My point was that people usually have no idea they've been compromised, and therefore won't reboot their device, so the malware becomes virtually persistent.
> Whatever it's verifying is insufficient to stop persistent iOS malware, hence the existence of the MVT toolkit
One of these assertions absolutely does not support the other; the newest persistent malware detected on iOS by MVT is from 2023 and targeted iOS 14. In iOS 15, Apple introduced the Signed System Volume (SSV): the OS lives on a separate APFS volume snapshot which is verified using a hash tree (think dm-verity, although the implementation sits at a slightly different level). Even Operation Triangulation couldn't achieve reboot persistence for their implant (which Kaspersky calls TriangleDB); rebooting would require re-exploitation.
This also affects your argument about "forensic" imaging (also - if you're asking the device for the image, it's always a logical extraction; if you don't trust the device, why do you trust the backup data you asked it for?): post-iOS-15, unless boot security was compromised - in which case you have bigger problems - you'll get the same bytes back for system files anyway.
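If it helps, here's a toy model of the hash-tree idea: any change to a block changes the root, so the system volume can be checked against a single trusted root value at boot. This is only a conceptual sketch, not how APFS/SSV is actually implemented:

```python
# Toy Merkle-tree illustration of dm-verity/SSV-style verification: tampering
# with any block changes the root hash, so one trusted root value suffices to
# detect persistent modification of the verified volume.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block0", b"block1", b"block2", b"block3"]
trusted_root = merkle_root(blocks)

blocks[2] = b"block2-tampered"
print(merkle_root(blocks) == trusted_root)    # False: the change is detectable
```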
> why do you trust the backup data you asked it for?
Devices could load minimal recovery/forensic images from a trusted external source (Apple Configurator USB in DFU mode?) or trusted ROM (Secure Enclave?), rather than loading a potentially-compromised OS.
> the newest persistent malware detected on iOS by MVT is from 2023
Thanks for the details on dm-verity-alike protection. There's been no shortage of zero-days patched by Apple since 2023. If there's a zero-day vulnerability in an iOS binary which parses persistent user data from the non-OS partition, the vulnerability can be re-exploited after reboot.
Now that you mention APFS snapshots, it would be wonderful if Apple could enable a (hotkey-selected) advanced boot option to (a) boot iOS without parsing any data from the user partition, and (b) transfer control to Apple Configurator for user-data snapshot export or rollback.
Do you know how iOS is isolated from non-Apple radio baseband firmware?
Most modern malware is not disk resident, as it has a higher probability of persisting by re-infection with an undocumented zero-day.
For example, people who play games that use GPS location services will find the interruptions magically stop for a while after a cold power-off and power-on restart. Or the battery suddenly stops draining quickly in standby, because recording/image capture had been burning power and data budgets.
Ultimately, a smartphone is impossible to fully secure, as the complexity has a million holes in it regardless of the brand. And Gemini is a whole can of worms I'd rather not discuss without my lawyer present. =3
I recently had the "pleasure" of reading over a criminal forensic investigation report. It was harrowing. The report was basically like "we ran a virus check and it reported clean, so nobody could have accessed the system remotely", and then it moved right along to the next thing. The logic felt more dubious than some of the court scenes from Idiocracy. And it had been produced for defense counsel and paid for by the defendant.
I have no idea what arguments were actually made. But that concern was raised somewhere along the chain asking for my (informal technical) opinion.
It's obviously quite difficult to prove a negative in general, but the complete lack of any standard of care then presented as an "expert opinion" for the defense was astounding.
(FWIW this was a MS Windows machine, and I think the AV was just Windows Defender)
The lack of standards falls on the practitioner. I ran a quick search and found that SWGDE best-practices guides and documents do consider the case of malware being present on the digital evidence sources, across many different scenarios [1]. Having an "expert" who is unaware of these guides is another story.
Do you have anything specific you're pointing to in those search results? Reading the excerpts, all but two are talking about malware on the analysis machine.
2012-09-13 SWGDE Model SOP for Computer Forensics V3-0 merely says to "Detect malware programs or artifacts".
2020-09-17 SWGDE Best Practices for Mobile Device Forensic Analysis_v1.0 seemed the most in depth, and it merely states:
> 9.4. Malware Detection Malicious software may exist on a mobile device which can be designed to obtain user credentials and information, promote advertisements and phishing links, remote access, collect ransom, and solicit unwanted network traffic. Forensic tools are not always equipped with antivirus and anti-malware to automatically detect malicious applets on a device. If the tools do have such capability, they do not typically run against an extraction without examiner interaction. If the examiner’s tools do not have antivirus/anti-malware capability, the examiner may need to manually detect malware through the use of common anti-virus software applications as well as signature, specification and behavioral-based analysis.
No, I just went to search whether the topic is mentioned in the guidelines (which it is, multiple times). I'd then expect a (good) expert to pick up on those breadcrumbs and search for how to do that (if they don't have the skills already). If I were working on a computer, I'd try to find IOCs that point to an infection (or lack of evidence for it).
If there's a memory dump to work on, a more in-depth analysis can be done with Volatility on running processes, but it usually falls back on the expert having good skills for that kind of search (malfind tends to produce a lot of false positives).
But at least the guides gave a baseline/starting point that seems to be better than what was described. It's very difficult to prove a negative, so I'd also be careful with the wording, eg: "evidence of a malware infection was not found with these methods" instead of "there's no malware here".
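For what it's worth, a minimal sketch of kicking off that kind of memory triage with Volatility 3's malfind plugin; it assumes the `vol` CLI is installed, the image path is a placeholder, and the output still needs manual review precisely because of those false positives:

```python
# Sketch: run Volatility 3's windows.malfind plugin over a memory image and
# save the output for manual review. Assumes the "vol" command from
# Volatility 3 is on PATH; "memory.dmp" is a placeholder path.
import subprocess

result = subprocess.run(
    ["vol", "-f", "memory.dmp", "windows.malfind"],
    capture_output=True, text=True, check=False)

with open("malfind_output.txt", "w") as out:
    out.write(result.stdout)
print("vol exited with code", result.returncode)
```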
What I quoted perfectly describes what they did. Ran one off the shelf antivirus scan and then considered the concern addressed.
It's obviously impossible to disprove a system had malware on it, but that fact itself should be part of any expert testimony. Especially testimony for the defense in a criminal trial.
I'd be curious if anyone has tried this for Android and what kind of stuff it's checking for. Sideloaded APKs can often contain malicious stuff, but it's nearly impossible to know if it's doing anything suspicious unless you open it up with a tool like Apktool [1] or run it on Triage [2] as it supports Android and watch what it's doing. Most antivirus for Android is pretty much a joke, as far as I'm concerned.
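As a first pass for the "open it up" step, something like this minimal sketch can decode an APK with Apktool and list the permissions it requests (assumes `apktool` is on PATH; the APK name is a placeholder):

```python
# Sketch: decode a sideloaded APK with Apktool and print its requested
# permissions as a quick manual triage step. Deeper behavioral analysis still
# needs a sandbox like Triage or manual review of the decoded smali.
import re
import subprocess
from pathlib import Path

apk = "suspicious.apk"          # placeholder filename
outdir = Path("decoded")

subprocess.run(["apktool", "d", "-f", apk, "-o", str(outdir)], check=True)

manifest = (outdir / "AndroidManifest.xml").read_text()
permissions = set(re.findall(r'uses-permission[^>]*android:name="([^"]+)"', manifest))
for perm in sorted(permissions):
    print(perm)
```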
Does the iPhone / iOS track the profiles of the machines it is physically connected with, and when "Allow Access" is selected? I ask because I did not have face authentication or a passcode on my phone, my ex-landlords illegally obtained my exempt property, and I would like to know if they plugged it into their computer and potentially obtained personal files from it. Yes, I know the lack of security was an oversight and a failure on my part. I accept that. However, they also tried to steal my car and sell it, and they refuse to return my property they are not legally entitled to possess ("tools of trade" under Texas law). The legal process takes time, so I'm just curious whether such a forensic investigation is possible.
I think that if your iOS version is recent and some basic code to unlock your phone is set, then even if you're not logged in it will not make the storage available, because you wouldn't be able to approve the connection or set the options to back up your data.
Hard to tell with Apple stuff. The approach to "getting it onto the device to run persistently, rather than only until reboot" varies quite a lot. There was Pegasus, for example.
I mean, did you have some sort of passcode (I cannot remember the name) set? Or what did you see if the screen was off and you hard-reset it, or at least soft-reset it, or just locked it with the power button and woke it up the same way?