One thing I like about this post is how it enumerates all of the dead ends that involved a lot of investment but produced no fruit. This is what is difficult about exploiting hard targets. What the general public doesn’t see is the massive amounts of “wasted” time involved in bug hunting. Instead, they see the headline about how some wunderkind exploited X in a matter of seconds or minutes, conveniently omitting the part where that person spent the last year staring at code, writing tools, fuzzers, test harnesses, and finding nothing at all - not knowing if they would ever find anything, or if there is even a bug to be found.
This might also be useful to counter the occasional complaint that Google only runs Project Zero to make its competitors look bad.
While they found some exploitable bugs on the iPhone here, they also found (and reported) that quite a few systems did not seem currently exploitable. It seems clear this is not a hit job, at least to me.
If Google's idea of a hit squad is to pay a few hundred thousand dollars' worth of salary to have people do your bug searching for you, and then give you a few months to fix it before letting anyone know (or until after you've fixed it, if sooner), then sign any company I'm working for up. I'm sure there are plenty of other companies that would love the attention too.
Apple gets attention from them because a lot of people have Apple hardware and software. That means problems there have a higher impact than on some less widely used product.
Apple is more than welcome to respond with its own researchers finding Android and other software bugs, and we'd all be better off if they did.
I don't think Apple has a problem with this unless it's irresponsible disclosure. They get a service from P0, and they're quite fast at fixing the issues, which works to their advantage. When an exploit is published it will always say "Apple already fixed the issue," which works just great for their image with customers.
"Responsible disclosure" is an Orwellian term invented by vendors to coerce security researchers, who have no duty to vendors and are virtually never compensated for their work, to conform to the vendor's own schedule and commercial preferences.
The better term is "coordinated" (or "uncoordinated") disclosure.
Those disclosures do not always impact only the vendors; they can also impact end users, so I don't think this is only about being nice to the vendors. (I'm aware that vulnerabilities can be exploited even if they are not disclosed by security researchers, and that disclosing can in some cases benefit the end user, but I'd guess disclosure makes exploitation much cheaper, since attackers no longer have to hire a team of highly qualified professionals to find the bugs.)
The point made is that "Responsible" implies an ethical obligation to tell a vendor before telling the public.
It's not clear at all that this is reasonable, and IMO, this seems to highlight the lack of incentive (or even perverse incentive) for vendors to secure their systems.
Semantics aside, I hope the point I was making is clear. By "irresponsible" I meant the regular definition of the word: reckless or careless, not following any best practices.
Out of personal curiosity, how is "coordinated" mitigating the issue you mentioned? It eliminates the vagueness of "responsible" but seems a lot more strict for the researchers:
> The primary tenet of coordinated disclosure is that nobody should be informed about a vulnerability until the software vendor gives their permission
Ok, and I agree that "responsible" is not the right word, but let's also acknowledge the asymmetry that often exists between the users and the bad guys.
Telling users that their device or software is vulnerable is not really useful to them unless it comes with a patch. Very few users have the knowledge, skill, or access to directly alter their technology to secure it, based on a vulnerability report. Most don't even know how to be aware of such reports, or even that they should be.
"Cyber bad guys" are more likely to be aware of and able to act upon a vulnerability report, as they have more knowledge than most users, and their entire mode of operation is that they don't wait for access or permission.
This is why "coordinated" disclosure is sometimes better. By announcing the vulnerability with the patch, the asymmetry between users and bad guys is better balanced. Users can be reasonably expected to install patches when notified to do so.
Of course there are all sorts of exceptions, such as when data is actively being leaked or exfiltrated, which users could delete or remove. Or when the security vulnerability affects systems managed by people sophisticated enough to take direct mitigating action, like changing server configuration or cycling keys.
Personally, I think part of what it comes down to is respect for the people in question. "We didn't tell you because statistically you're not likely to do the right thing, if you do anything at all" shows a lack of respect for people and their ability to determine their perceived best action and follow through with it.
If people really can't be bothered to follow through with what's going on, then they'll offload that responsibility to someone else if it's important enough. We already do that with IT for companies, and with anti-virus for a lot of people at home (as much as I think most of those companies focus on the wrong thing) [1]. Adding information to the system allows that market to be more efficient and useful.
1: I would love to live in a world where most of the protection was a combination of OS and application vendors patching their own software, and protection consultant/anti-virus companies that knew what software you ran giving you good information on what you should and should not do, either on a regular basis or for short periods until something is fixed. I think that's a much more valuable service than "we scan all your incoming and outgoing mail and make your computer so slow and unresponsive you think you need to buy a new one".
I think there's a balance to strike here. The fact that security researchers like the Project Zero people don't just publish exploit details and sample code on day 1 suggests that it might not actually be in the best interest of users to do it. It may actually cause far more damage to the users than giving the manufacturers the standard time to patch. As a matter of fact, every such disclosure made "irresponsibly/uncoordinated" on Twitter was universally condemned by security researchers and software providers alike.
Doing something "responsibly" isn't just the prerogative or duty of security researchers. It's a general term which means "putting thought into it".
I think you are wrong about "universally". Probably widely condemned, I doubt it was universal, unless you are drawing convenient definitions where anyone who thought it was fine wouldn't be a security researcher.
Sometimes I do wonder if companies would take security more seriously if the convention was to disclose publicly without delay. My guess is they would instead just pay more for breach insurance, or try harder to shoot the messenger.
Apple should be performing this kind of research themselves on such a high profile target as iOS. If you let a competitor discover stuff like this then you kind of deserved to get "hit".
Only if the guys working on other systems are less talented. If equally talented guys work on other systems then it's not a hit job, and I'm sure Google puts its best people on finding holes in Google products.
There is something to be said, though, for the fact that there is always something to be found somewhere. Lacking toolchains that can fully eliminate bugs, there will always be bugs. And even if there were legitimately no bugs today, that doesn't mean there won't be bugs introduced in future releases, or bugs outside the software itself.
The low hanging fruit has definitely been thoroughly picked these days, at least in software like iOS. Yet with near certainty, exploits find a way.
Perhaps even more fascinating is that closing off one class of exploits always seems to be followed by new kinds of exploits, like clockwork. You'd think eventually this would stop happening, but so far it hasn't. CPU side-channel attacks may not be completely new, but they've emerged as one of the new big targets as of late.
I really know little about security personally, and obviously the amount of work that goes into serious security research should never be underestimated, but I'm impressed by the inevitability of security bugs. Very few large programs escape a regular cycle of bugs. Maybe OpenBSD is the best example of Pretty Secure software.
> The low hanging fruit has definitely been thoroughly picked these days
The problem is that vendors aren't in the business of perfecting existing software. They're in the business of pushing out big, bold, feature-rich changes and additions.
The vulnerabilities found in the article support your argument: SMS and MMS, which barely ever change or get new features, were much more secure than iMessage, which constantly gets new features and architectural changes.
There was a Microsoft Research paper several years ago which looked at the relationship of exploits to age of code in BSD kernels. The number of exploits in older code diminished over time, but that was often offset by the higher prevalence of exploits in newer code. That much is intuitive, but it's good that there's some well researched empirical evidence showing that that's the case.
We can hypothesize that better languages and better mitigations (e.g. ASLR) can improve the situation across the board, and I don't doubt that that's the case, but I haven't yet seen the evidence. (Maybe I missed it?) It's probably too early as it's difficult to make apples-to-apples comparisons across those aspects.
Are you totally and completely unaware that Apple actually does dedicate some annual cycles to bug fixes, performance improvements, and stability? And that they've done this many times in the last 20 years?
So what does the software look like up until those 'annual cycles'?
It's pretty clear from the economic incentives of the companies producing the software, the sheer quantity of exploits found, and the surprisingly low amount of QA that would be needed to catch most bugs that the aforementioned view is correct.
There are tiers of low-hanging. I think we're just seeing the end of the infosec stone age.
It's still plain that OS vendors continue to make big compromises, e.g. by continuing to use C/C++ to handle untrusted data decades after the risks became obvious, and we're constantly seeing C-caused vulnerabilities like the Windows RDP server remote root, the WhatsApp remote root, the Broadcom and Qualcomm WLAN RCEs, etc.
The laissez-faire attitude toward memory safety has also held back the state of the art on other fronts, because it's not so interesting to eliminate other classes of bugs while the elephant remains loose in the room.
> some email providers also filter incoming messages and remove malformed MIME components that are needed to reach a vulnerability.
I wrote @ronomon/mime [1] for this purpose, to protect downstream user email clients from MIME bombs [2], invalid charsets which can crash Apple Mail, multiple From headers which can exploit Gmail [3], attachment directory traversals, malicious continuation indices, truncated MIME messages, stack overflows etc.
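To give a flavor of a couple of those checks, here is a rough Python sketch (illustrative only, not how @ronomon/mime actually implements them): reject duplicate From headers outright, and refuse attachment filenames that carry path components.

    import email
    import email.policy
    import os

    def check_message(raw_bytes):
        # Illustrative checks only; a real defensive parser needs many more
        # (size limits, nesting depth, charset validation, etc.).
        msg = email.message_from_bytes(raw_bytes, policy=email.policy.default)

        # Multiple From headers are a classic parser-confusion vector:
        # different clients may display different ones, so reject outright.
        if len(msg.get_all("From") or []) > 1:
            raise ValueError("multiple From headers")

        # Refuse attachment filenames carrying path components, so a name
        # like "../../.bashrc" can never traverse directories when saved.
        for part in msg.walk():
            filename = part.get_filename()
            if filename and os.path.basename(filename.replace("\\", "/")) != filename:
                raise ValueError("suspicious attachment filename: %r" % filename)

        return msg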
> VVM works by fetching voicemail messages from an IMAP server maintained by the device’s carrier. The server URL and credentials for this server are provided to the device by the carrier over SMS.
Visual voicemail is just IMAP and SMS? Wow, for some reason I assumed there would be some distinct protocol for that. It's pretty interesting that it's implemented using the existing software the phone already runs on.
And frustrating to learn that an open protocol with free implementations is locked behind a carrier that often requires significant fees for this feature.
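Which makes the lock-in sting even more once you see how little is involved on the client side. A rough sketch of the fetch (the hostname and credentials below are made up; the real ones arrive in the carrier's provisioning SMS):

    import email
    import imaplib

    HOST = "vvm.example-carrier.com"   # hypothetical; pushed to the phone via SMS
    USER = "15551234567"
    PASSWORD = "secret"

    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            # Each voicemail is just a message with an audio attachment.
            for part in msg.walk():
                if part.get_content_maintype() == "audio":
                    # Audio format varies by carrier; AMR is common.
                    with open("voicemail-%s.amr" % num.decode(), "wb") as f:
                        f.write(part.get_payload(decode=True))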
I remember reading that Visual Voicemail was a patented feature (not patented by Apple) and required some software and patent licensing fees before it could be offered to customers, hence the cost.
Otherwise I love VVM, and it really should be standard on all iPhones and carriers.
I dunno, but here's a way to build your own visual voicemail (this is how I have been doing it since 2015):
1. sign up with a voip provider that either terminates on your own pbx (e.g. asterisk/freepbx) or provides voicemail with emailed audio files. set up said service with voicemail.
2. set up your favorite container system and let systemd manage the service
3. create a container that watches for new audio voicemails. when it finds a new voicemail, push it to your favorite cloud provider (s3, gcp bucket, azure, etc.), then trigger that provider's speech-to-text function on the audio file in the related cloud storage. store the resulting text message and metadata in a db.
4. create another container that simply watches that db for new messages and pushes out alerts however you see fit. you have many paths here. i went with a web-based ui that pulled the data a la google voice.
3 years ago mozilla dropped deepspeech which changed everything - https://github.com/mozilla/DeepSpeech
i have since replaced step 3 with my own deepspeech server (rough sketch of the watcher below). none of my data goes to any of the major privacy violators with this setup.
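for anyone curious, the watcher from steps 3-4 boils down to something like this. rough sketch only - the paths, table schema, and model filename are just examples, and since the transcription now runs locally i've folded the speech-to-text step into the same container:

    # rough sketch of the watcher container (steps 3-4), assuming a local
    # deepspeech model and a sqlite db; filenames and schema are examples
    import glob, os, sqlite3, time, wave
    import numpy as np
    from deepspeech import Model

    VOICEMAIL_DIR = "/var/spool/voicemail"          # wherever the pbx drops wav files
    DB_PATH = "/var/lib/voicemail/messages.db"

    model = Model("deepspeech-0.9.3-models.pbmm")   # example model filename

    db = sqlite3.connect(DB_PATH)
    db.execute("CREATE TABLE IF NOT EXISTS messages (path TEXT PRIMARY KEY, text TEXT, ts REAL)")

    def transcribe(path):
        # deepspeech wants 16 kHz mono 16-bit pcm; have the pbx emit that,
        # or convert with sox/ffmpeg before this point
        with wave.open(path, "rb") as w:
            audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        return model.stt(audio)

    while True:
        for path in glob.glob(os.path.join(VOICEMAIL_DIR, "*.wav")):
            if not db.execute("SELECT 1 FROM messages WHERE path = ?", (path,)).fetchone():
                db.execute("INSERT INTO messages VALUES (?, ?, ?)",
                           (path, transcribe(path), time.time()))
                db.commit()                          # the alerting container polls this table
        time.sleep(30)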
I know in the late 2000s, if you had a smartphone, there was a $10-15 line item charge for it. If you had a BlackBerry you got BES; if you had Apple you got VVM.
The excuse the carriers used for charging for it was always bullshit. But like hotel WiFi, if you had one of these smartphones you didn’t care because work was paying for it.
> We looked at this extension in great detail, looking for a way to spawn a WebKit instance on the receiving device, but did not find any. The WebKit processing always appears to be done by the sender.
This is done for privacy reasons, AFAIK: it prevents "read receipt" links like you might find in emails. It does, however, allow the sender to attach a misleading thumbnail.
The final few paragraphs touch upon how expansive the attack surface can be due to this serialization code. So yes, the libraries are terrible.
Asking the HN audience: is there a set of design principles that the iMessage team could follow to make these libraries more resilient to such attacks while retaining their usability? As a non-Apple employee whose globally dispersed family relies on iMessage to stay in touch, I have a vested interest in the security of my family's iPhones. I know it's rare for Apple employees to comment, but it would be great if someone from Apple could say whether these libraries are being re-architected in some way. That would cut through any FUD that arises from this disclosure/discussion.
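To make the question concrete, the kind of principle I have in mind (purely illustrative Python, nothing to do with Apple's actual code) is to never hand untrusted input to a general-purpose object deserializer, and instead validate it against a small, closed schema before anything richer gets built:

    import json

    # Illustrative only: a closed schema for an incoming message, with
    # explicit type and size checks before anything else touches it.
    ALLOWED_KEYS = {"sender": str, "body": str, "timestamp": int}
    MAX_BODY_LEN = 10_000

    def parse_message(raw: bytes) -> dict:
        data = json.loads(raw)                      # data stays plain dict/str/int
        if not isinstance(data, dict) or set(data) != set(ALLOWED_KEYS):
            raise ValueError("unexpected fields")
        for key, expected in ALLOWED_KEYS.items():
            if not isinstance(data[key], expected):
                raise ValueError("bad type for %s" % key)
        if len(data["body"]) > MAX_BODY_LEN:
            raise ValueError("body too large")
        return data

The point being that the parser never instantiates attacker-chosen types, and everything beyond this boundary only ever sees validated, size-bounded values.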
If this aims to enumerate all the vectors, it seems to be missing the lower-level radio and networking attack surface - e.g. the WLAN one that was a high-profile Project Zero finding some time ago - and common networking components like the DHCP client, mDNS, VPN protocols, etc. There are probably WLAN-driver-style vulnerabilities on the cellular side too.
How many people were working on this project for how long? Also, is APNs something they took a look at? No mention of it in the article, which is surprising. Maybe they figured that it would be impossible to inject something into a push notification.
I asked this question the last time this came up and the answer I got was "Project Zero does not accept bounties. They generally ask for the money to be donated."
That makes much more sense; it's not like Google needs the money, so having it go to charity is much better. Now the follow-up question is: does Apple do that?
Maybe they have an image incentive to make more noise about iOS bugs than about Google's own, but that's about it. Otherwise what they do actually helps Apple, who patches really quickly.
So what's Google's motivation? Are they a charity now?
I phrased it in a somewhat unnuanced way, but it does leave me a bit puzzled. I suppose companies can be nice and not nice at the same time, but I'd rather look at the incentives first before looking at how nice they're being.
Google has said in the past "we make our money from usage of the internet, and so anything that increases internet usage helps us".
That was their stated motivation behind things like Chrome: increase the speed and utility of webapps so that more things could be online. It works in this case because they are improving security, and therefore user trust in online services in general.
The cynic in me is sure that that's not the only reason, but it's part of the picture.