
Not for long. Will be patched in hours, I bet.


This would actually be a fun interview question: how do you emergency-patch 1B+ globally distributed mobile devices? I would say at least several days for the obvious QA that needs to be done.


>I would say at least several days for the obvious QA which needs to be done.

And you would, unfortunately, not get the job.


Well, there's probably a new position on the QA team opening up soon in any case.


Desperate times call for faster deployment. Been there. Sometimes you might even need to patch a binary.


For this? I seriously doubt that this would require binary patching. A simple recompile with the fix should be good enough.


Not for this one, no.


Why would you need to patch the binary? If you're pushing out an update to fix the issue, the update may as well be a rebuilt app rather than a binary diff.


I was once involved in a really hairy service interruption that involved URLs hardcoded into an SDK (http://a.com/b.js), where a.com was remapped to a brand new web service that no longer had the file. As if that weren't enough, the server returned a 301 permanent redirect to http://c.com with a 60-day TTL. Our traffic dropped 40% because of that mistake (not 100%, because in the previous couple of releases I had changed that URL to something widely used and at no risk of getting messed up, but old clients still had the original URL embedded).

As if that weren't enough, the iOS library had for some reason implemented "hand-rolled" caching with a lot of hardcoding.

To fix the issue... I made the c.com homepage also serve b.js at the bottom.

There is no way in hell this would have been committed and tested within the QA timelines of either the a.com or the c.com setup.

It's rather complex to fully explain without giving more context, but when traffic drops 40% and you are generating $5M/day... you do whatever it takes to bring it back.
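
A minimal sketch of that workaround, assuming a small Python edge service and an invented LEGACY_B_JS payload (none of this is the actual setup described above): keep answering the old /b.js path directly, and also inline the same script at the bottom of the c.com homepage so old clients that followed the cached 301 still receive it.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    LEGACY_B_JS = b"/* contents of the old b.js shim */"   # placeholder payload
    HOMEPAGE_HTML = b"<html><body>c.com homepage"

    class LegacyAssetHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/b.js":
                # Old clients still hitting the hardcoded URL directly.
                self.send_response(200)
                self.send_header("Content-Type", "application/javascript")
                self.end_headers()
                self.wfile.write(LEGACY_B_JS)
            else:
                # Clients that followed the cached 301 land on the homepage,
                # so the same script is served inline at the bottom of it.
                body = HOMEPAGE_HTML + b"<script>" + LEGACY_B_JS + b"</script></body></html>"
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), LegacyAssetHandler).serve_forever()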


I might have misunderstood your previous comment. You were saying that for this specific incident with Apple they wouldn't need to do binary patching, right?


Yes, they probably wouldn't.


They would probably need to at least blacklist every affected iOS version from the FaceTime servers; there will be a long tail of devices that won't update any time soon.
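
A rough sketch of what that gating might look like, with invented names (the version list, function, and request shape are all assumptions, not Apple's actual service): the call-setup path simply refuses group-call setup for client builds known to carry the bug.

    # Illustrative only: the affected-version list is made up for the example;
    # the real range would come from Apple's own tracking.
    AFFECTED_BUILDS = {"12.1", "12.1.1", "12.1.2", "12.1.3"}

    def allow_group_call(client_os_version: str) -> bool:
        """Return False for client builds that should be blocked from group calls."""
        return client_os_version not in AFFECTED_BUILDS

    # Example: an old, unpatched device asks to start a group call and is refused.
    assert allow_group_call("12.1.2") is False
    assert allow_group_call("12.1.4") is True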


This is assuming there is no possible server side mitigation.


Even if there is a server-side mitigation, it's a big black eye to have devices be remotely tappable with no user interaction.

The devices are obviously not trustworthy anymore with the current software, and you are at the mercy of Apple's servers. So a spying Apple could always undo the server-side mitigation (if this is even mitigatable server side).

It's also a wake-up call to see that it is even possible for devices to start sharing audio or video with no user interaction. Obvious in hindsight for a software engineer, perhaps, but the public perception might be forever changed.


I mean, you can disable FaceTime:

https://www.imore.com/how-to-turn-on-off-restrict-facetime-i...

So I wouldn't get all tinfoil over this matter, though I would probably disable FaceTime if I were most people. Then it's likely to only be an issue of audio being streamed, which is much less horrible than inappropriate video streaming. I would hope they roll out a server-side fix initially: "if the calling user adds themselves to the group call, hang up the group call," or some similarly silly logic. I'd rather see that first, and then the client-side fix.
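
That "silly logic" could be sketched roughly like this (all names are invented; this is not Apple's actual call service): if the originating caller adds their own account while the call is still ringing, the server tears the call down instead of bridging any audio.

    from dataclasses import dataclass, field

    @dataclass
    class GroupCall:
        caller: str
        state: str = "ringing"                 # "ringing" | "connected" | "ended"
        participants: set = field(default_factory=set)

        def add_participant(self, user: str) -> None:
            # The reported trigger: the caller adds themselves before the
            # callee answers, which caused audio to start streaming.
            if user == self.caller and self.state == "ringing":
                self.state = "ended"           # hang up instead of bridging audio
                return
            self.participants.add(user)

    # Example: the caller adds themselves before the callee picks up -> call ends.
    call = GroupCall(caller="alice")
    call.add_participant("alice")
    assert call.state == "ended"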


> but the public perception might be forever changed.

No. Humans fix on their first impression and seldom change it.

Soon after Apple releases a fix, they will boast about how fast Apple fixes bugs.

It takes decades to realize that no software is secure.



