I can agree that things in macOS/iOS settings may be broken or unintuitive or whatever (I have no examples but I'll believe you).
But a shitshow? No. Windows' Control Panel is a shitshow.
What I'd like to know from you is: if macOS/iOS settings aren't the closest thing to perfect, what is? The GNOME settings panel on my main desktop is a certified shitshow. But macOS/iOS? Nah.
The old macOS settings wasn't perfect, but it was much better. On iOS, I don't know, I think it's just a very hard problem. iOS has a ton of settings, but honestly the only way to fix the settings app on iOS is perhaps to start removing features.
It's not a straw man argument, it is the argument.
An average user can't dive into the Bluetooth driver code and figure out where something deviates from the 4000-page spec and becomes a security issue. So we have to assume the worst.
Having an open standard doesn't mean every implementer will act in good faith and follow best practices. The consumer still ultimately has to choose which product they use, and can keep using Apple's solution if they trust Apple to implement the spec securely. The insane straw man you're proposing is that the choice is between a single blessed, infallible solution from Apple and a wild west where the only way for a consumer to be safe is to personally audit their device against a 4000-page specification document. Absurd! We use devices and software every day that implement open standards, and while issues do arise in particular implementations, they arise no more frequently than issues discovered in proprietary solutions and standards.
Go look at the CVEs for iMessage: the plurality of RCEs on Apple devices in the last decade come from Apple's iMessage implementation, and it's their own protocol! And almost all of the rest are in Apple's implementations of open web standards!
There are maybe three tech companies in the US that have large security groups dealing with persistent threat actors. Apple is one of them. Google is another.
Even with that (large) Apple security group, iMessage is difficult to lock down properly, as you note. Still, I think the cost of 0-day subscriptions for iOS vs. Android tells a pretty good story: iOS zero-day subscriptions sold to intelligence agencies/governments cost roughly $1mm per seat (phone compromised). Android -- $10k.
There are many, many decisions along the way that add up to that 100x additional cost of an iOS security breach -- value Apple delivers to its customers when they purchase iOS products.
You cannot pick and choose from the outside and know which of your preferred opening-up implementations would affect that cost. My argument is that opening this up is one of likely hundreds of possible decisions that would each contribute to lowering the cost of an exploit.
You are just wrong about 0-day values. See, e.g., exploit vendor Crowdfense's publicly offered rewards for mobile 0-days:
SMS/MMS Full Chain Zero Click: from 7 to 9 M USD
Android Zero Click Full Chain: 5 M USD
iOS Zero Click Full Chain: from 5 to 7 M USD
iOS (RCE + SBX): 3.5 M USD
Chrome (RCE + LPE): from 2 to 3 M USD
Safari (RCE + LPE): from 2.5 to 3.5 M USD
And "large" tech companies, despite having "large" security teams (and "large" scope!), are far from the only ones competent at securing devices/software against persistent threat actors: Node.js, Linux, the BSDs, Bitcoin, RoR, Firefox, curl, etc. There are dozens of open-source projects with 0-day values in excess of seven figures (and plenty of private enterprises too!), and Apple and Google are not in any way specially equipped (or better than others) at dealing with the most dangerous PTAs in the world just because they have the largest armies of overpaid EE/CS grads.
I’m past the edit window unfortunately: you’re completely right as far as I can tell.
NSO leaked pricing has not historically differentiated Android or iPhone. I’m not sure where I heard those numbers, but thanks for the correction.
Tiny tiny nit - paying the same for an exploit doesn’t mean you’ll charge the same, but in this case it looks like the value and price structures are what you describe. Sorry!
Slightly less small nit - securing hardware, OS, and cloud inside some security-perimeter model is a lot harder than securing, say, the Bitcoin client. So point taken - and it's hard at scale, not easy.
It helps that I enjoy coding, and that I deeply care about the data I'm preserving. I got burned after losing two years' worth of un-backed-up data scraped off Twitter right before they closed the API, so I'm never ever going to get that data back.
The backup automation evolved pretty organically, but slowly. I was happy when I finally was able to get the weekly backup process to start automatically.
I used to save everything by default and over the years (~20) my storage requirements started getting out of hand.
So I had a change in philosophy: I threw everything into a "to delete" folder, started over with a single flat "keep" folder, and went through everything file by file, really evaluating whether I needed each one before moving it over. I ended up with about a 90% reduction, and I don't feel like I'm missing anything.
Yeah, this. The data I'm most concerned about is not even 1GB after compression: all my $HOME configs and all the projects I'm working on. Then I have some open datasets I like to fiddle with (mostly *.sql.zst compressed DB dumps), which I periodically dump on my Linux server (weekly, with rsync), and finally -- video.
Video is obviously like 99.99% of everything, but I've made sure to store all its sources (mostly downloaded YouTube playlists), and I have scripts that synchronize the videos from the net to my local folders. I've even tested that a few times and it worked pretty well.
So indeed, in the end, just find what's most valuable and archive that properly. In my case I have one copy in my server and 5+ copies on various cloud storage free tiers. All encrypted and compressed. Tested that setup several times as well, I have a one-line script to restore my dev $HOME, and it works beautifully.
Also, the brain evolved to be a stable compute platform in a body that finds itself in many different temperature and energy regimes, and it can withstand and recover from some pretty severe damage. So I'd suspect an intelligence designed to run in a tighter temperature/power envelope, with no need for recovery or redundancy, could be significantly more efficient than our brain.
This made me think... I suspect someday AI will be able to just drop into a piece of commodity hardware, sort itself out, and even compensate for shitty hardware a bit.
For instance, if it can't control a hand properly, it'll just run an evolutionary algorithm on it for a while, compile the result, upload it to the hand controller, and be off in about 4-5 seconds.