In the history of the industry no mass-market computing platform has been safer than the flagship hardware/software platforms from Apple and Google --- on no platform does an exploitable vulnerability cost more to obtain, and no platforms have ever been more capable of establishing secure channels between themselves.
SS7 is insecure. But operational practices at both the carriers and inside governments rely on those insecurities to get jobs done, and some of those jobs are important and enjoy wide support. Anything we do to shore up the security of SS7 will, almost necessarily, include compromises most of us here will find hateful, and we'll be stuck with those compromises for another generation.
Rather than "fixing the potholes" in GSM and SS7, we could instead accept that the cell signaling layer is insecure, and route around those weaknesses with application code that can establish end-to-end secure channels accountable only to their users. That's pretty close to what Apple has already done with SMS text messaging, which opportunistically upgrades to Apple's secure iMessage protocol. We can do even better than that!
That's what we've done with the Internet, where this approach is called "the end to end argument in system design". It worked there and will work just as well for telephony.
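The opportunistic-upgrade pattern described above (what iMessage does with SMS) can be sketched at the application layer: try to establish an end-to-end channel, and fall back to the insecure transport only if that fails. A minimal sketch with hypothetical function names, not Apple's actual protocol (real systems also need to authenticate the key directory itself):

```python
# Toy sketch of opportunistic upgrade: prefer an end-to-end encrypted
# channel, fall back to the plain SMS-style transport. All names here
# are hypothetical illustrations, not a real messaging API.

def lookup_e2e_key(recipient, key_directory):
    """Return the recipient's public key if they support E2E, else None."""
    return key_directory.get(recipient)

def send_message(recipient, plaintext, key_directory,
                 encrypt, send_secure, send_sms):
    key = lookup_e2e_key(recipient, key_directory)
    if key is not None:
        # Recipient supports the secure protocol: encrypt end-to-end,
        # so the cell signaling layer never sees the plaintext.
        send_secure(recipient, encrypt(key, plaintext))
        return "e2e"
    # No key on file: degrade to the insecure cell-network path.
    send_sms(recipient, plaintext)
    return "sms"
```

The point of the pattern is that the insecure transport remains usable, but is only exercised when no better channel exists.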
Who needs to break crypto when you have a baseband processor relay back location, audio and video?
The problem with potholes such as silent SMS is not that they exist; it's that baseband manufacturers have demonstrated an unwillingness to address them, alongside other readily addressable things such as IMSI catchers.
It's cool we made the application processor secure, but it's pretty pointless when the 5G chip is in fact a hostile implant.
The baseband on an iPhone doesn't see the video or audio frames fed into Signal (or, for that matter, Skype and Hangouts). And the point of end-to-end encryption is not trusting the baseband to set up a secure channel.
Yes. The baseband on an iPhone is effectively just a USB peripheral. Did you think Apple spent tens of millions of dollars designing a custom secure enclave processor running a separate operating system unrelated to iOS and just said "fuck it, give the whole system to the baseband vendor"?
"To protect the device from vulnerabilities in network processor firmware, network interfaces including Wi-Fi and baseband have limited access to application processor memory. When USB or SDIO is used to interface with the network processor, the network processor can’t initiate Direct Memory Access (DMA) transactions to the application processor. When PCIe is used, each network processor is on its own isolated PCIe bus. An IOMMU on each PCIe bus limits the network processor’s DMA access to pages of memory containing its network packets or control structures."
It's still a common meme that on modern phones the baseband has full access to whatever it wants. The available evidence (which admittedly is very scant) and common sense suggests that this is not true, not in iPhones or common chipsets used in Android phones.
It may have been true on older phones with simpler system architectures but you're really going to need some new evidence to show the meme still holds true.
available evidence (which admittedly is very scant)
How do you figure? When Apple or Google say 'baseband is constrained from accessing OS memory in the following ways', these aren't unverifiable claims. People would be doing demos at conferences showing malicious basebands thieving your private catpictures.
Hm, according to the official GSM docs, SIM cards are small OSes with limited, but not that limited, power: they can officially set various kinds of phone options, use the speakerphone, etc.
Of course that doesn't automatically mean such systems exist, like some fictional ECHELON project, but the potential power is there, and so is the known technology...
> Who needs to break crypto when you have a baseband processor relay back location, audio and video?
Not exactly a citation for the exact claim, but several people in this thread make the assertion that Qualcomm processors have some kind of location tracking service running on them, running under a parallel OS:
I think that's pretty close. The baseband processor is just one of the many hidden nooks that can run privacy-invading/security-breaking code outside of user view or control.
Also, I think it stands to reason that the baseband processor has everything it needs to at least relay a rough location: the list of in-range towers ~= user location, and it has full control of a channel to relay that information.
Audio and camera don't inherently need to be hooked through the baseband. And without a serious redesign of the cell protocols, any use of the cell network spills your location.
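The "list of in-range towers ~= user location" point is easy to make concrete: averaging the known coordinates of visible towers, weighted by signal strength, already yields a neighborhood-level fix with no GPS involved. A toy sketch, with tower positions and weights invented for illustration (real cell-ID databases supply such coordinates):

```python
# Toy location estimate from in-range cell towers: a signal-strength-
# weighted centroid of known tower coordinates. All tower data below
# is invented for illustration.

def rough_location(towers):
    """towers: list of (lat, lon, signal_weight). Returns (lat, lon)."""
    total = sum(w for _, _, w in towers)
    lat = sum(la * w for la, _, w in towers) / total
    lon = sum(lo * w for _, lo, w in towers) / total
    return lat, lon

# Three hypothetical towers around a city block:
towers = [(48.8570, 2.3500, 3.0),
          (48.8585, 2.3530, 1.0),
          (48.8560, 2.3520, 2.0)]
lat, lon = rough_location(towers)  # lands within the block the towers span
```

Even this crude estimate is enough to place a user within a few hundred meters in a dense urban cell layout, which is the sense in which the baseband "has everything it needs" for rough tracking.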
With the baseband and application processors being logically separate, strengthening the isolation between them is straightforward.
For starters, you can use a phone with a discrete baseband chipset, like the Samsung Exynos lines.
Or use a Wifi-only application computing device with a separate not-always-on Mifi for a data connection.
If you want to be more paranoid, use an application processor with an Atheros chip and an MMU, or a third device as a protocol converter that translates Wifi to RS-232.
But the real point is that with IP-based protocols and Free software, you can unilaterally do any of the above without requiring your friends to also do so in lock step.
This little fella hangs an LTE modem off a QCA WiSoC via USB (on a mini PCIe slot). The stock modem is a little spotty, but is trivially replaced.
They ship with a fork of OpenWRT, or you can use an official OpenWRT build on it. When I find a few free hours, I'm hoping to put together a set of Buildroot configs as well.
This is an interesting perspective; however, I think it still falls short in a few areas where users have a reasonable expectation of privacy and security. A note on background: I spent over 9 years in cellular telecom, so take these biases as those of a long-term industry insider.
Apple/Google have made great progress in several areas, but there are still pieces of the puzzle we entrust to the carriers:
General Location Information:
By the nature of the way cellular networks work, they require the approximate location of the device at all times it is powered on. There are lots of rules and carrier preferences around this, but when the radio is active, the cellular network needs to know which towers are closest, and when the radio is inactive, a wider location (think a postal/ZIP code).
I think most people would reasonably consider leaking this information or allowing it to be public unreasonable.
Specific Location Information:
So I'm not totally sure about this one, since I unfortunately never took the opportunity to test it. There exists a Diameter API for E911 services that allows requesting the GPS coordinates of a device. I never tested whether the device would notify the user if this API was used, or whether it was only functional from within a 911 call.
So take it with a grain of salt, but embedded within this might be the possibility to continually request specific GPS coordinates of a device.
Denial of Service:
A large issue with many of these weaknesses is that they can lead to targeted denial of service. If I have access to the Diameter network, I can send routing updates pointing to a location where you aren't, and continually deny you access to the network.
The device might be secure, but if you can't use it, that's still a problem. This wouldn't require much sophistication, probably on a similar level to running a DDoS attack against a website.
Inflated Billing:
Another side effect: I can target you and generate inflated bills by indicating you are roaming on an expensive network. When I left, I believe the most expensive roaming location was still $65 CDN/MB. Try dealing with a carrier while explaining that you can't possibly be in Canada and Africa at the same time.
Detectability:
These problems can be hard to detect: if you suspect fraud on your account due to denial of service or inflated billing, there are only a select few people at the carrier who can find these problems. The Diameter protocol also has a specific weakness: requests are routed by destination network, but answers are routed back along the path the request took, and may cross 3-4 companies' networks. So even if you try to implement source-based policy, it's trivially easy to spoof the source when mounting these types of attacks.
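The routing asymmetry described here can be modeled in a few lines: the answer follows the recorded hop-by-hop path, not the claimed origin, so a filter keyed on the origin field never even sees the attacker. Everything below is a toy model with invented hostnames, not real Diameter message handling:

```python
# Toy model of the Diameter weakness described above: a request carries
# a claimed origin (attacker-controlled) plus the actual hop-by-hop
# path it traversed; the answer is routed back along that path. A
# policy that trusts the claimed origin field is trivially bypassed.

def origin_filter(request, trusted_origins):
    """Naive source-based policy: accept if the claimed origin is trusted."""
    return request["claimed_origin"] in trusted_origins

attack = {
    "claimed_origin": "partner-carrier.example",  # spoofed value
    "path": ["attacker.example", "transit1.example", "home-carrier.example"],
    "payload": "Update-Location: subscriber now roaming on expensive-net",
}

accepted = origin_filter(attack, trusted_origins={"partner-carrier.example"})
# The spoofed request passes the filter, and the answer travels back
# along attack["path"] in reverse, reaching the attacker even though
# they were never in the trusted set.
answer_route = list(reversed(attack["path"]))
```

The takeaway matches the comment: filtering on the source identity a peer asserts about itself is useless when that identity is just a field in the message.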
Sim Updates:
I'm not totally familiar with this part of the architecture, but if you're able to spoof the device into connecting to a rogue network, you may be able to do more nefarious things than just route or block the user-plane traffic. You may actually be able to send SIM updates that could brick the device, or maybe even run programs on the SIM card. Don't hold me to this, though; I'm pretty far removed from this part of the spec.
I'm also not totally convinced the internet model has all the answers yet, but I'm happy to see the progress made over the last several years. That's really a different topic, though, so I won't dive into it here.
I think my argument would be that progress needs to be made on both sides of the equation. Carriers should work towards operating with the least privilege necessary (good luck... having seen the inside, I don't have my hopes up), while still protecting the privacy and integrity of the information they do need (routing locations, E911 services, record keeping, OTA updates, metadata).
Because if you are named Google, for instance, you are quite safe: you can know pretty much anything about anyone thanks to the surveillance network that everybody uses, wants to use, and buys from you, more and more.
But if you are a "typical citizen", well... you have fewer and fewer choices. For instance, you want a classic mobile phone? Good luck finding one. And even if you find one, can it still operate? In many developed countries we are starting to talk about shutting down GSM/2G networks, leaving only 3G as a backup and investing massively in 4G...
Do you want a car that you and only you can control? Hmm... OK... You have few options: buy a historic car (feasible if we don't need to travel much, until we close the gas stations and turn them into charging stations) or build one from scratch, and good luck not only with the technical part but also with the bureaucracy needed to make it legally usable on the road.
That's the problem.
Today's dictators don't need overt power anymore: why forbid traditional non-connected, non-autonomous cars when you can simply stop producing them? With today's centralization it's super easy.
We are NOT in a free-market capitalist society; we are in a kind of planned economy not much different from the Soviet Union's, only instead of a formal dictatorial government, with clear powers and symbols, we have a vague corporatocracy without symbols, formally "constrained" by some kind of "democratic laws". And that is even worse than a classic dictatorship, because there you know your enemy; now you can't really know it, and you can't fight an enemy you neither know nor see.
for instance you want a classic mobile phone? Good luck to find one.
It would take me a 15-minute walk and $20 to get a basic dumbphone right now. Where do you live that these are hard to find? Or by "classic", do you mean old? If so, what's the advantage?
But the classic phones are exactly those that run a closed OS and only support protocols (SMS and phone calls) that can be spied upon by the ISP. If you don't trust Google and Apple, why would you trust a classic phone?
Do you want a car that you and only you can control? Hum... Ok... You have few options: buy a history car (...) or create one from scratch
Infotainment systems have been becoming too invasive, yes, but you still have options. For example, the Nissan Micra comes with a FM/AM radio with MP3 support. No GPS, mobile data, etc.
I live in the south of France, and I have zero idea where to find something like a Nokia 3310... Nowadays I only see smartphones of various (crappy) kinds, or phones for old people that mimic classic phones but with the crappiness of '90s-style dot-com-era business software...
Of course the 3310 was proprietary, but it was simple enough to offer only feeble means of spying on me, and the level of centralization was far, far less than now. Trivially, yes, my carrier can spy on me. But only my carrier, not a super-giant multinational data-mining company. And my carrier is subject to my country's laws, which I know a bit about, so I have a certain protection. With Android & co. devices there are tons of different parties that can spy on me from any country in the world. I have essentially ZERO protection. And modern devices can do far more than mere audio recording at the carrier level...
> Infotainment systems have been becoming too invasive
Oh, I don't really care about infotainment. I do care about being unable to switch ABS off in the snow, so I have a chance of braking the car under my own control. I care about being able to start the engine without any possible software-crash interference (a friend of mine was locked inside his top-of-the-line Audi because of a software fault, for instance). I care about NOT having the ability to power on my car via a smartphone, which means having a remote-control device that connects my car to its vendor and to me via the Internet, all on proprietary software I neither know nor control.
I can't really use a Nissan Micra; I normally use a car for trips of 30-60 km up to 300+ km, not distances or environments suited to a city car...
And even on the simplest city cars: a few days ago a friend asked if I could help because she had left the lights on and her car's battery was drained. I came over; it was evening with poor light, and it was raining. I saw a big red plate beside one pole of the battery and a black cable on the opposite pole. My jumper clamp started sparking the moment I connected what I thought was the + pole... Cursing loudly, I grabbed a headlamp and looked: TWO of the damn battery poles had no +/- sign, TWO had big black cables, and the big red plate had a rigid connection UNDER it, painted black like the battery, attaching it to the farthest pole. I don't know how a mechanical engineer can be dumb enough to design such a thing.
And this year they even released a model with 4G support, reducing the fear of network support shutdown.
Regarding cars, fair enough; I don't even drive, so I don't have good knowledge of what's available. I do doubt you'll find any car built in the last 30 years without any software, but if it's running fully locally, I don't see how it's that different from any other custom part. If the locks in your friend's car had used a simple electrical system and it broke, he would have been just as stuck.
I talked about infotainment because they often come with Internet connection and such, which is different than just replacing parts with software, but as far as I know, you can still find many models without it.
They are not classic phones, only modern crap with ancient design unfortunately...
A softwareless car isn't exactly what I'm looking for, at least if the software is free, community-accessible, well peer-reviewed, and I can modify it without super-complex and expensive equipment. The problem is "modern" design. For instance, ABS really saves lives in some conditions, like on dry or rain-wet roads; however, it kills you on icy roads. In "ancient" cars there was a button to deactivate ABS when you wanted; now it has disappeared. Modern cars have small stereo cameras and other sensors that try to look around and may act on the brakes: for instance, if you fall asleep driving and are about to crash into an obstacle, they start shaking the steering wheel and braking softly; if you still don't react, they brake hard, turn on the hazard lights, etc. Really useful. However, if you deliberately choose to crash into an object, for instance to avoid a group of children, you simply can't. If you deliberately choose to go off-road because you see a big trailer full of kerosene or other fuel about to crash and explode beside you, you can't. For those kinds of driving aids we normally still have a button to disable them, but it's probably the same story as with ABS: at first deactivatable, later always on.
I've heard for the nth time the story that on planes and ships you can deactivate autopilots and pretty much any feature because pilots, maritime personnel, etc. are properly trained, while drivers tend not to be. But that's an absurdity: the correct, logical, acceptable answer is "add training". Not that difficult. Thinking that software can be better than a human is something we already heard in the past from Microsoft, and I think we all agree that wasn't a good idea...
They are not classic phones, only modern crap with ancient design unfortunately...
How are they different with regards to the stuff we were talking about - specifically, the centralized spying and such?
I've used a Wiko Lubi for a year. It seems to have the same architecture as a 3310: a basic burned-in OS without any apps or updates, and a few basic tools like a calculator and calendar, besides phone calls and SMS. In what way does it not fit the classic model?
If your complaint is the exact same models don't keep getting produced, then sure, but that wasn't the discussion I thought we were having.
Regarding cars I won't comment any further, since I don't know the market well.
They differ in terms of usability and stability; they all feel crappy, uncomfortable, unreliable, made of cheap plastic...
If you still have an ancient Nokia, just try the feel of its physical keys, the simplicity of its classic menus, compared to those modern "devices"...
> no mass-market computing platform has been safer than the flagship hardware/software platforms from Apple and Google
That's a pretty low bar, because, as you say:
> SS7 is insecure
Also...
> operational practices at both the carriers and inside governments rely on those insecurities to get jobs done, and some of those jobs are important and enjoy wide support
Sure, law enforcement is easier when the citizenry isn't free. But the U.S. used to pride itself on being the one place on earth where that argument does not carry the day.
I'm sorry, Ron, but this is a non-sequitur. The premise of my argument is that SS7 is insecure, and that doesn't matter, because modern phones don't rely on SS7 as a secure channel.
They do if you are making a regular phone call. But (and you and I have been down this road several times now) even if you are not using the baseband, you have only Apple's word that the rest is secure. No one outside Apple can audit their products.
That's pretty obviously hyperbolic, since Apple's OS and firmware code is one of the two most aggressively reverse-engineered codebases in the industry.
iOS 12.1.2 is 1.6 GB. If you could reverse-engineer it at the rate of one byte per second (which I think is wildly optimistic) it would take you 50 years to get through the whole thing. So no, I don't think it's "wildly hyperbolic" to suggest that Apple could hide a backdoor in there somewhere if they wanted to.
Do you think reverse engineering binaries works like the dude in The Matrix who stares at that screen full of characters and sees "blonde, blonde, redhead, blonde"? Like, they just open a hex editor and start at the first byte and read left-to-right?
A screen? They use a screen? I thought they printed the hex dump on that special paper with green and white stripes and holes down the sides and did the disassembly by hand with a 0.5mm mechanical pencil. When did that change? Why didn't anyone tell me?
A lot more seems to be pulled out of a hat here, though. At how many bytes per second do you read and interpret regular high-level source code? How do we know there isn't a secret backdoor in the Linux kernel and all of its drivers?
1.6 GB is the total image size - it's not all code, let alone OS or security-critical code. It's not all reversed from scratch every time a new update appears. It's not like the people looking at it are also staring at reams of ARM assembly. Etc, etc.
> At how many bytes per second do you read and interpret regular high-level source code?
I don't know, but it's not hard to put an upper bound on it. A VT-100 screen worth of code is at most 2000 characters. If it takes me one minute to grok the code on a screen that's (again, at most) 33 characters per second. Even if we take this upper bound as the rate at which one can process object code (which, again, seems wildly optimistic to me) that's still several work-years to go through the iOS image.
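The back-of-the-envelope figures in this subthread (the 50 years at 1 byte/second from upthread, and the 33 characters/second bound here) are easy to check:

```python
# Quick check of the figures quoted in this subthread.
image_bytes = 1.6e9                 # iOS 12.1.2 image, ~1.6 GB

# At 1 byte/second, around the clock:
seconds_per_year = 365 * 24 * 3600
calendar_years = image_bytes / 1 / seconds_per_year   # ~50 years

# At ~33 chars/second (one 2000-char VT-100 screen per minute),
# counted against 2000-hour work-years:
rate = 2000 / 60                    # ~33.3 bytes/second
work_year_seconds = 2000 * 3600
work_years = image_bytes / rate / work_year_seconds   # several work-years
```

Both quoted figures hold up as arithmetic; the dispute in this thread is over whether a bytes-per-second model describes reverse engineering at all.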
> How do we know there isn't a secret backdoor in the Linux kernel and all of its drivers?
We don't. I don't claim that Linux is better. Go back and look at the context of my original comment. It applies to both iOS and Android. Tptacek's response veered the discussion onto iOS.
The problem is not that Apple or Google is evil or incompetent. They may be, but that's not the point. The problem is that these systems are too complicated, and the rewards for breaking them are too high. A back door into iOS or Android could literally allow someone to rule the world (the President of the United States has been reported to use an unsecured phone). That is not a situation about which it is wise to be complacent.
> 1.6 GB is the total image size - it's not all code, let alone OS or security-critical code.
That is certainly true. But a back door can hide just about anywhere, so you have to look at all of it.
> It's not like the people looking at it are also staring at reams of ARM assembly. Etc, etc.
Yes, I know that. That's not really the point. Even if it were feasible to thoroughly audit the object code (I don't believe it is, but I'll concede it for the sake of argument) this effort is not a coordinated white-hat endeavor. Many of the reverse-engineers are black hats, or state actors not necessarily sympathetic to the needs of U.S. consumers. Even if they do find an issue it is not a slam-dunk that they will responsibly disclose it rather than attempt to exploit it for monetary or political profit.
You write this as if you hadn't kicked this weird subthread off with the argument that iOS was problematic because nobody can audit it. "No one outside Apple can audit their products". That is simultaneously false and inconsistent with the comment you just wrote.
"No one outside Apple can audit their products" is obviously an overstatement. Obviously people can take Apple's products and test them, subject them to fuzzing and reverse-engineering. But no one outside of Apple has access to their design documents and source code, and that limits the extent and effectiveness of those efforts. I probably should have hedged with, "No one outside of Apple can FULLY audit their products" or something like that.
Also, the focus on iOS is a distraction. Apple makes their own silicon. Vulnerabilities are probably more likely to be hiding there than in the software. Modern silicon is very hard to reverse-engineer.
None of your arguments in this thread cohere. You wonder upthread why I'm not volunteering more of my own perspective on this. It's because I don't want to arm you with more weird tangents to pursue. Let's try to break this down:
1. We're talking about SS7 security.
2. Whether or not SS7 is secure, if your phone is compromised and hostile, it's game-over. So it's hard to see how this debate is even relevant.
3. You introduced the argument that Apple and Google phones were untrustworthy because nobody outside Apple (or Google) can audit them. You now say that's an overstatement.
4. When informed that there's in fact a whole cottage industry of people outside Apple who do audit iOS from binaries --- effectively enough, I'll add now, that they routinely find vulnerabilities that Apple missed despite Apple's privileged access to "design documents and source code" --- you express incredulity.
5. You denominate your incredulity in a "bytes per second" rate of reverse engineering based on compiled image sizes.
6. Later, you derive a rate from the number of characters you can fit on a VT100 screen.
7. You later clarify: we have to audit everything, you see: not just the kernel and privileged services but also every pixel in the Apple logo that appears when you boot the phone, and, of course, the security of the weather app.
8. Now we can't trust silicon either. Vulnerabilities are (???) more likely to be hiding there than in software.
This is a kaleidoscope of weird, wrong arguments about security, and, once again, has nothing to do with the thread. If you simultaneously believe all these things, it still doesn't make sense to try to protect phone calls by securing the SS7 network.
What I think you should do is take a simple binary from your desktop OS, download a copy of radare, learn how to use it, and blow your own mind about how even the free, open source reversing tooling works. I'm not being snarky. I think you'll be surprised by how this stuff works.
First of all, thanks for taking the time to write this detailed comment. I really do appreciate it.
Second, let's try to achieve clarity on what it is we're actually disagreeing about here, because I'm not sure we even agree about that.
> no mass-market computing platform has been safer than the flagship hardware/software platforms from Apple and Google
I actually agree with that (though I wonder if leaving Microsoft off this list is actually justified, but that is neither here nor there). What I don't agree with is the implication that Apple and Google flagships are plenty good enough, and that buying phones from Apple or Google is the right answer to all our security concerns.
An analogy: before Fukushima it could be said of boiling water reactors that "no reactor design has been safer". Indeed, even after Fukushima the safety record of BWRs compares very favorably overall (in terms of casualty rates) with almost all other forms of energy. That doesn't mean that we can't do substantially better, or that we shouldn't try. But humans have always had trouble with low-probability high-impact events.
> 1. We're talking about SS7 security.
Well, sort of. The original topic was SS7 security (or the lack thereof). This is a somewhat unfortunate distraction because it led down a rabbit hole: yes, SS7 is insecure. But that doesn't matter much because most sensitive data that is transmitted by a cell phone nowadays is locally encrypted. But it matters some because most != all.
> 2. Whether or not SS7 is secure, if your phone is compromised and hostile, it's game-over. So it's hard to see how this debate is even relevant.
Well, SS7 is a data point. It is an existence proof that modern cell phones contain security flaws because they are the product of a long design process with a lot of legacy from a time when security was less of a concern. If one such flaw exists, others might as well.
> 3. You introduced the argument that Apple and Google phones were untrustworthy because nobody outside Apple (or Google) can audit them. You now say that's an overstatement.
Yes, I chose my words poorly. I apologize for that.
> 4. When informed that there's in fact a whole cottage industry of people outside Apple who do audit iOS from binaries --- you express incredulity.
No, that's not fair. I am well aware that this industry exists. But I am skeptical that the existence of this industry is sufficient reason to accept the proposition that anyone who owns an Apple or Google flagship need not be further concerned about security, and I think my skepticism can be justified. Even if I'm wrong, I think my position is defensible. And I may well be wrong. That would actually be a good outcome.
> 5. You denominate your incredulity in a "bytes per second" rate of reverse engineering based on compiled image sizes.
I have a lot of reasons to justify my skepticism. I could write a paper (and maybe I should). But HN comments do not lend themselves well to long-form communication, so I advanced what I thought would be a compact argument: a quick back-of-the-envelope calculation of the amount of effort required to audit iOS (which is just one component of an iPhone), showing it to be infeasible. Now, that calculation may have been way off. That entire argument may have been wrong. But I can always fall back on the fact that to prove there are no security holes in iOS you would have to solve the halting problem (specifically, you would have to solve the lambda-equivalence problem, which is at least as hard as the halting problem). So I'm pretty sure I'm on solid ground with at least some level of skepticism about the effectiveness of reverse engineering.
> 6... 7... 8...
Yes, because the position that I'm arguing for is that there are a lot of potential problems that the reverse-engineering industry is not well equipped to address no matter how effective its tools are. And furthermore, even if I'm wrong about that, that is still not enough to justify the conclusion that owners of Apple and Google flagships need not be further concerned, because it is not just the abilities of the reverse-engineering industry that matters, but also their motives. And I see a lot of grounds for skepticism about that.
> it still doesn't make sense to try to protect phone calls by securing the SS7 network.
Yes, we agree about this too.
> radare
I was not aware of radare, so thanks for that pointer. It does appear to be a very impressive and comprehensive collection of tools. But one thing it doesn't have (AFAICT) is advanced AI that automates extracting algorithms and intent from object code. So at the end of the day, you still have humans searching for needles in a 1.6 GB haystack.
No, I don't think you have a handle on what reverse engineers are actually looking at when they look at binaries (semi-spoiler: they're not necessarily even looking at assembly instructions), and I don't think explaining it in a comment is going to be nearly as useful as urging you to try it for yourself on a toy problem would be.
Another thing to acquaint yourself with --- orthogonal to the Radare pointer --- is the concept of a "lifter". Or, in another direction, with symbolic execution. Or, still another, with modern decompilation tools.
Instead of addressing the substance of my argument, you are attacking my familiarity with the particulars of the tools and techniques used in contemporary reverse engineering. This is a logical fallacy known as the Courtier's Reply:
Hang on, the substance of your argument is a sort of mysticism - first you claim that an iOS image is a black obelisk fathomless to man and when challenged and essentially forced to concede this is completely inaccurate you fall back on saying your position is generally right and what's more, both right and unfalsifiable because of Archangels Turing & Church.
Neither of these are serious, engaged-with-the-actual-topic sorts of arguments. It's a little rich to be coming back with a grumpy note on the taxonomy of logical fallacies.
"A straw man is a common form of argument and is an informal fallacy based on giving the impression of refuting an opponent's argument, while actually refuting an argument that was not presented by that opponent."
Yes. Really. (Note that there is no mention of obelisks.)
To be perfectly clear (since misunderstandings seem to be running rampant in this thread), I did not literally pull that number out of my hat. That was a figure of speech, meaning that I picked a number that seems not entirely unreasonable but to which I didn't give a whole lot of thought, because the exact value is not essential to my argument. No matter what wizardry your tools provide, it seems exceedingly unlikely that you're going to burn through the code at, say, 100 bytes per second without running the risk of missing something. So the question of what you consider a reasonable estimate was deadly serious. If you think my estimate is wrong, then tell me what you think the right answer is and why. The fact that I didn't put a lot of effort into my analysis is not in and of itself a valid criticism. Sometimes people get the right answer for the wrong reasons. (In fact, I am in the middle of preparing a series of lectures on the history of science, and it turns out that the first time someone gets the right answer, they usually get it for the wrong reasons!)
No matter what wizardry your tools provide, it seems exceedingly unlikely that you're going to burn through the code at, say, 100 bytes per second without running the risk of missing something.
The fact that you are focused on inspecting every byte of the code indicates that you are not yet familiar with how this process actually works.
By way of background, I have done security source code audits of systems on the order of 750,000 lines of code. This was done in a 12-week effort. The approach taken with source code review is possibly different than you think. One part of the approach is to look for patterns of code that are known vulnerability patterns, such as SQL injection, or opening a socket. You then trace back through the code paths that lead to that point to determine how external input (that is, user-controlled or attacker-controlled) can be used to trigger those vulnerable pieces of code. Another part of the approach is to look at each of the inputs (or interfaces) to the code to determine how those inputs can influence the behavior of the program. One is likely to switch back and forth between these two approaches.
One key approach in looking at the code is to ask "can user input cause the program to choose one path of a branch or another?" Another key question is "can the user's actions cause a change in one word of the program's memory?" From that, an exploit can be crafted.
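The pattern-scanning pass described above can be sketched in a few lines. This is a toy illustration, not a real audit tool: the sink patterns and the sample snippet below are invented for the example, and a real review would then trace backwards from each hit to see whether attacker-controlled input can reach it.

```python
import re

# Known-dangerous "sink" patterns, in the spirit of the first pass
# described above (SQL injection, command execution, raw sockets).
# Illustrative only, far from exhaustive.
SINK_PATTERNS = {
    "sql-injection": re.compile(r"execute\(.*%s|execute\(.*\+"),
    "command-exec":  re.compile(r"os\.system\(|subprocess\..*shell=True"),
    "raw-socket":    re.compile(r"socket\.socket\("),
}

def find_sinks(source: str):
    """Return (line_no, pattern_name, line) for every suspicious line."""
    hits = []
    for no, line in enumerate(source.splitlines(), 1):
        for name, pat in SINK_PATTERNS.items():
            if pat.search(line):
                hits.append((no, name, line.strip()))
    return hits

sample = '''
import os, socket
def handler(user_input):
    os.system("convert " + user_input)   # attacker-controlled -> sink
    s = socket.socket()
'''

for no, name, line in find_sinks(sample):
    print(no, name, line)
```

The output of a pass like this is a worklist, not a verdict: each hit is a starting point for the trace-back step, which is where the real judgment happens.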
So you might well now ask "ok so that is source code. Object code is orders of magnitude more difficult." This is not really the case. The tools that 'tptacek mentioned take apparently impenetrable object code and transform it to assembly language (as well as to an intermediate language ESIL), and answer many questions about the static and dynamic nature of the code under inspection. Also, you can get differences of the call graphs from one version to the next. This trick was used to detect a vulnerability resolved by a Windows patch in a common library. It was noted through this tool that there was another use of this library elsewhere in the system that left the vulnerability in. This is without having source.
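The call-graph-diffing trick mentioned above (spotting a second, unpatched caller of a vulnerable routine) can be illustrated at the source level with Python's `ast` module; binary tools do the same thing over disassembly. The function and routine names below are made up for the example.

```python
import ast

def call_graph(source: str):
    """Map each top-level function to the set of names it calls."""
    tree = ast.parse(source)
    graph = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            calls = set()
            for sub in ast.walk(node):
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    calls.add(sub.func.id)
            graph[node.name] = calls
    return graph

old = "def a():\n    parse(x)\ndef b():\n    parse(x)\n"
new = "def a():\n    parse_safe(x)\ndef b():\n    parse(x)\n"  # b() missed the fix

old_g, new_g = call_graph(old), call_graph(new)
# Diff the graphs: which functions changed their callees between versions?
changed = {f for f in old_g if old_g[f] != new_g.get(f)}
print(changed)  # only a() was patched; b() still calls the vulnerable parse()
```

The interesting signal is exactly the asymmetry in the diff: the patch touched one call site and left the other alone, which is what flagged the residual vulnerability in the Windows case described above.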
In fact, for the most serious level of analysis, one should go directly for the binary, as who knows how the source code actually corresponds with what binary actually gets shipped.
And it turns out one can effectively audit code for a language that one is not an expert in. The key elements are "where are the branches" and "what are the call graphs" and "what are the inputs and outputs".
In another thread, you note that you are an expert and that you are involved in the production of a security product. I am as well, having been in the software development business for 52 years, the last 10 in the security field, focusing on software security. And I can testify that these are two different fields of expertise. An expert in software development, even of security products, does not automatically mean that one is an expert in finding security flaws in code.
I've trained many software engineers in software security, and a key part of that training is to note that software engineering builds up programs and solutions by using previously developed abstractions, and making new abstractions that use existing ones. A penetration tester will develop skills in penetrating abstractions. It is a different way of thinking, a different kind of expertise. It is clear from your work that you are excellent at building up abstractions.
There are a couple of ideas that you are missing, I think. One is that evaluating the "rate of burn" through the bytes of a binary blob is not a useful way to determine the difficulty of assessing its security. (Nor is it a useful way to evaluate software productivity.) It is not necessary to look at every byte. Think instead of looking at every basic block. I suspect you will get a number that differs by two orders of magnitude from what you are currently thinking.
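As a loose analogy (CPython bytecode rather than native object code), here is a sketch of counting basic blocks instead of bytes. Block leaders are approximated as the entry point, jump targets, and the fall-through successors of jumps and returns; a real CFG builder is more careful, but the bytes-vs-blocks gap shows up even in a toy.

```python
import dis

def basic_block_count(func):
    """Approximate basic-block count: leaders are the entry point,
    jump targets, and instructions following a jump or return."""
    ins = list(dis.get_instructions(func))
    leaders = {ins[0].offset}
    for i, instr in enumerate(ins):
        if "JUMP" in instr.opname or instr.opname.startswith("RETURN"):
            if isinstance(instr.argval, int):
                leaders.add(instr.argval)       # branch target
            if i + 1 < len(ins):
                leaders.add(ins[i + 1].offset)  # fall-through successor
    return len(leaders)

def example(xs):
    total = 0
    for x in xs:
        if x > 0:
            total += x
    return total

blocks = basic_block_count(example)
nbytes = len(example.__code__.co_code)
print(blocks, "blocks vs", nbytes, "bytes of bytecode")
```

Even for this trivial function the block count is a small fraction of the byte count, and the branches and call sites are what an auditor actually reasons about.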
I did not literally pull that number out of my hat. That was a figure of speech
Fair enough.
No matter what wizardry your tools provide, it seems exceedingly unlikely
Wait no, this is wrong. Experts are telling you this is wrong. Non-experts (say, me) looking at the same stuff can easily see that it's wrong.
If you think my estimate is wrong, then tell me what you think the right answer is and why
I think I saw some tptacek comment in another branch of this thread about 'how many bushels does it take to get to space'. I think he's right that your arguments are in that exact realm of underinformed inarguability. I don't have to prove the halting problem to reasonably say that an airliner, with all its software, is a more reliable and safer form of transportation than a snowboard. Yet you bring this up as some sort of meaningful retort. The onus is not on me to show reasonableness.
So might I gently suggest that you ask someone who has deep experience, well beyond his hat, with actual reversing? Such as 'tptacek, 'cperciva, 'yan, many others here.
I would be happy to get a real data point, but it won't affect my argument unless my estimate is off by orders of magnitude. I don't need an accurate estimate for my argument to hold, just an upper bound that is in the ballpark.
This comment is a moving target. You edited it between when I received a copy via HN Replies and when I came here to respond. At the moment it reads, in its entirety:
> You're not even counting the right thing. How many orders of magnitude off you are isn't even the most important problem with your analysis.
It's more than a little disingenuous of you to say that without saying what the most important problem is. How exactly did you expect me to respond to this?
The previous sentence, which I edited out because I didn't like it, and it didn't make an argument I hadn't already made, was "this is like asking how many bushels it would take to reach orbit".
I edit comments to refine language but never to change their meaning.
Obama used a Blackberry for years before eventually switching to a hardened Galaxy S4 that was so locked down that "it doesn’t take pictures, you can’t text, the phone doesn’t work, you can’t play your music on it" [1]
Trump, on the other hand, does use an iPhone but doesn't have it checked by security experts as often as his aides would like [2]
Imagine a service where you type in a phone number, and it used the GPS location data sold by cellular providers to obtain the physical location of the phone number. It would then autonomously fly a drone near the GPS location of that device and use an onboard cell-site spoofer to intercept data from that device.
Recent Australian laws make it possible to force Australian companies and individuals to compromise software to defeat encryption. That could be as simple as getting a boutique update delivered to a device that includes a screen recorder or keylogger, and it doesn't necessarily have to be the messaging app that gets compromised. That isn't really a problem unique to Australia or to state-level actors. I think Apple and Android have some protections against screen recording.
It wouldn't have to be the e2e software as the delivery app for the payload or tooling. Regardless it was more about the fact that governments and other actors have options. Phones are just like any other software platform.
Any “boutique update” you’re talking about would require compromising the OS development process, which means that any protections against screen recording would be easy to remove or work around.
That is the hope. But who to trust? Not to mention, some apps already have the permissions for their legitimate use cases, so why not just pick one of those? It may not even require a client update, just requisition of the data from the company. The underlying idea is that smartphones are safer, but they are still software, and your trust points are spread very thin over literally hundreds of people and companies. All of this, blasting across the internet and into dozens of other peoples servers daily. It's hard to consider it secure.
Not an encrypted messaging system that is linked to your phone number, and thus to your IMEI number. The base station knows which IMEI it is talking to. It (and the other base stations) know the signal power and can triangulate your location. Any app which has location information (or the many that already sell it) could provide your location. If your IMEI were unknown (your carrier's database would have to be more secure than most), it could still be statistically found by correlating tracked locations with signaled IMEIs.
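The triangulation step is, at its core, textbook trilateration. Below is an idealized, noiseless sketch with made-up tower coordinates; real networks estimate ranges from timing advance and received signal power, with plenty of measurement error, so practical systems solve a noisy least-squares version of the same equations.

```python
import math

def trilaterate(towers, dists):
    """2D position from three tower positions and range estimates,
    by linearizing the circle equations and solving with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    # Subtracting circle equations pairwise gives two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # hypothetical tower sites
phone = (3.0, 4.0)                               # true position
dists = [math.dist(t, phone) for t in towers]    # ideal range measurements
print(trilaterate(towers, dists))  # ≈ (3.0, 4.0)
```

With three towers and clean ranges the position is fully determined, which is why "the network knows roughly where you are" is a structural property of cellular service, not a bug in any one protocol.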
Metadata is much more powerful than is commonly supposed. Timing attacks are at least as effective as keystroke statistical attacks, which are real. You're better off if both of you are running VPNs all the time and have lots of background traffic on the same connection.
No it's not. Your phone number is associated with an identifier that is used to talk to cell towers. The (US) cell providers (were) sell(ing) real time location data of their subscribers. No amount of encrypted chat is gonna fix that.
I've always wondered this. When encryption algorithms are broken, we phase them out for new ones. When cell tower protocols have weak encryption we don't seem to do anything about it. I hear that edge and 2g protocols are completely unsafe but there's not even an option in my phone to disable them. What gives?
>"When cell tower protocols have weak encryption we don't seem to do anything about it."
SS7 is not a "cell tower protocol"; it's an entire protocol stack that allows a telco central-office switch to talk to any other central-office switch at any other telco anywhere in the world, for both copper and cellular subscribers. It runs the entirety of the global phone network. Class 5 telco switches are often old; many of these switches have been in service since the mid-1970s. Do an image search for "Nortel DMS 500" and you will get an idea of how old and stodgy a lot of this gear is. And it all needs to interoperate seamlessly, as governments, emergency services, etc. all rely on it. In fact, when they added IP capability to SS7 (SIGTRAN), they basically forklifted it as is, warts and all. Presumably to err on the side of caution.
A handset only implements a small subset of the SS7 protocol stack: the Mobile Application Part (MAP).
Don’t make phone calls on the telco layer. Make them on the application layer, with something like FaceTime audio or Signal. If phone companies won’t secure their networks, lay a secure layer on top of the phone company network.
Actually 5G provides this overhaul, more than it provides speed benefits for customers.
The 4G backend still has a web of trust between operators and, e.g., their IP exchange (IPX) providers. As far as I know, this will change with 5G.
Roaming data can then be routed and encrypted all the way to the home operator's network, while the associated metadata remains accessible to the IPX to provide its services.
The home operator can verify the smartphone is actually in the visited network.
These are all bits and pieces that break up the operator's web of trust.
> Nobody could have envisioned how deeply ingrained cellular technology would become in our society
Am I the only one often peeved by this kind of slop in thought and expression?
Of course somebody could. Some visionaries even did, and not just Arthur C. Clarke.
The first fully open source phone (RISC-V?[0]) that ditches the 3G chip and goes wifi only using either software defined radio or open source wifi chipset (RISCV again?[1]) will be the only thing to fix this IMO.
We have the means to have secure communication over insecure channels with asymmetric crypto signing+encryption (which doesn't seem broken at least for now), the problem is semi-solved at the software layer -- we now need to solve the privacy/security issue at the layers below software.
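A minimal sketch of "secure communication over insecure channels" is classic Diffie-Hellman key agreement: anyone snooping the radio link sees p, g and the public values, but not the derived secret. The 127-bit prime below is a demo-only assumption, far too small for real use (real systems use vetted groups or X25519), and bare DH still needs the signing step mentioned above to stop a man-in-the-middle.

```python
import secrets

# Toy finite-field Diffie-Hellman. Everything "sent in the clear" is
# what an eavesdropper on the insecure link would observe.
p = 2**127 - 1   # a Mersenne prime; demo-only, far too small for real use
g = 3

a = secrets.randbelow(p - 3) + 2   # Alice's private exponent (never sent)
b = secrets.randbelow(p - 3) + 2   # Bob's private exponent (never sent)
A = pow(g, a, p)                   # Alice -> Bob, sent in the clear
B = pow(g, b, p)                   # Bob -> Alice, sent in the clear

k_alice = pow(B, a, p)
k_bob   = pow(A, b, p)
assert k_alice == k_bob            # both ends derive the same secret
```

In practice the shared secret feeds a key-derivation function and an authenticated cipher, and the public values are signed with long-term identity keys, which is the "signing" half of the signing+encryption combination mentioned above.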
Wi-Fi is often absolutely terrible for low-latency applications such as VoIP (bufferbloat). Also, that means your phone only works at home and in the office.
Yes, but this is only if you subscribe to wifi as it exists today, or near you.
It's becoming increasingly common to rent portable wifi devices from 3G carriers, and if long distance wifi mesh networks ever take off things will be even better.
The idea is to not have your primary mobile computing platform be compromised, if you can prevent it.
Hmm interesting read. In Sweden there is a thing called BankID and basically you can use your mobile device as a universal authenticator. Of course, you need to have the device and enter a 6-digit pin, but I often wondered how dangerous it was to use this so much. And on top of that I know people that used it in local cafes on public WiFi.
I would love to examine the communication between BankID's app and the internet to see what kind of security exists to protect the user. If you can get a person's social security number (personnummer) and their 6-digit code, then spoof their device (probably the easier part), you can basically take over their life in Sweden.
BankID should be safe, as the communication can be secured by other means like HTTPS/TLS (assuming it's still an app/applet -- haven't used it in many years).
The article is about the basic communication between the phone and the cell tower, which has other issues.
Pretty much my hope. If it's so easy to snoop on cell traffic, my hope is that the app communication is encrypted using modern standards and airtight. And I'm sure you're right, since these apps are under more of a microscope. It's probably fine.
I had my Uber account hacked even though it had SMS 2FA enabled (from Russia as best I could tell). Now maybe there was some flaw in Uber's implementation but I don't trust SMS 2FA. Talk to any competent security researcher - SMS 2FA is only mildly better than no 2FA.
The fact that cellular traffic to this day isn't encrypted properly[1] even though LTE was supposed to should indicate just how horrible cellular providers are at infosec & what happens when they drive security requirements.
No one wants safe, widespread solutions: we want to be able to spy, for both bad and good reasons. The good part is simply justice: telecommunications are vital to everyone, criminals included, to the point that we do not want to limit them. But to catch criminals we still need a bit of surveillance power.
Unfortunately, the very same power is of interest to criminals themselves, who want to spy on their own targets. That goes for any kind of criminal. The home thief might like to follow you to know when you go on holiday, what kind of security you have at home (because yes, you post shiny photos of your new home surveillance system, together with its floor plan and photos of you and a few technicians during installation), and what you have in your house (because you post tons of photos and selfies with revealing backgrounds). Your insurance company discreetly buys your data from Amazon/Google/Microsoft/Apple: data recorded by voice assistants, smart devices with cameras everywhere, the speaker and mic of your phone, and so on. (Curiously, such spying devices used to be bought, at great expense, by the people who wanted to spy on you; today you buy them yourself from the people willing to spy on you, and you even pay for their connectivity and electricity.) And your government likes to know your political opinions and influence network, as the old East German Stasi did and the modern NSA/FBI/CIA/* do.
The real "safety" point is not safety itself but balance of power. A knife that is good for cutting a succulent steak is also good for killing someone, and perhaps for opening a package. The same goes for a car, or a phone. They are instruments with more or less effectiveness, comfort and power. If they are balanced, so that everyone has more or less the same power, we have no real safety problem. If too few have too much power, we have a problem, and it gets bigger as the powerful counterparts get fewer.
Unfortunately, to properly balance power as a society we also need a certain level of awareness and civic sense distributed among us, because yes, knowledge is power, at any level. Today, and not only today, we are evolving into a more and more ignorant society, ruled by a smaller and smaller élite over ever more sheep.
I guess this is supposed to be the part where the masses decide whether the added safety is worth the inconvenience of not having cellphones? Or have we already decided?
I recently launched https://www.tamarin.us (fake websites + canary credentials) hoping I could capitalize on some of this - but IMO it's a hard sell (and a lot of the salespeople I spoke with kept confirming how hard enterprise security sales are). It will probably be a while before I try to work on another privacy-related product.
Fortunately I'm having a little bit more luck on my current project in the health space.
I like this idea, but I think there are probably two things that are an issue with this:
1 - You remove the company's ability to plausibly deny that something happened; you become a second subpoenaable party that would disclose something if forced to.
2 - You're not pricing it high enough for a big reseller (like CDW, etc) to want to try to sell it.
Not sure how to fix #1 besides selling/licensing the tech (if the patent issues) to a larger company that can roll this into a larger offering (and out to their existing customers).
Background: I've worked in Enterprise Software Sales and as part of a SaaS Operations Team.
I think you're exactly right on both points, and licensing is probably the best bet.
The value prop I tried to push for MSP resellers was that it would result in more incident response work for them. Basically offering to white-label the thing.
This single post reveals more about how disconnected from reality corporate/enterprise leadership is incentivized to be, and about the overall state of bad faith w.r.t. users' privacy, than anything I've seen in a long time.
Big institutions are fundamentally feudal organizations. If you look back at medieval times, some of the lords and dukes were wise men driven by some higher purpose. Others were not.
The tools have changed, but people are the same.
It’s also why regulation is so important. As under feudal lords, the agents of the overlord (i.e. the auditors) are feared and respected. Compliance tied to compensation or continued employment is something that is cared about.
If you're walking down the street talking on a cell phone on a Summer day without sun protection, there's no debate as to the relative danger of the UV and radio waves you're absorbing. The UV is vastly more dangerous.
You are being extremely charitable with the comparison to the sun. It's probably what, at least 4 orders of magnitude more dangerous to be exposed to the sun than to use a cell phone?
This is a tired response that everyone memorizes but fails to back with facts.
1. There are studies showing some effects besides DNA mutation, such as heating, due to non-ionizing radiation, which could cause a number of health effects.
2. The World Health Organization classified cell phone radiation as a potential carcinogen. The CDC has stated that there is no conclusive evidence one way or the other on whether cell phones cause cancer.
Ah yes, the good old "Group 2B carcinogens" that are "possibly carcinogenic to humans". It includes lead, DDT, dry cleaning (as a job), firefighting (as a job), aloe vera extract, ginkgo extract, and pickled vegetables.
A more dangerous Group 2A includes red meat, "Shift work that involves circadian disruption", and "Very hot beverages (more than 65°C)", according to Wikipedia.
Group 1 contains UV light.
So, walking outside in a sunny day sipping coffee after eating BBQ with kimchi is probably more dangerous than cell phones. Doubly so if you're a firefighter.
I really don't get how the WHO keeps those lists without severe backlash to their image. Since positive proof that something does not cause cancer is nearly impossible, both basically state that "a lot of people think these could cause cancer, no one has actually observed it, be wary".
But walking outside in a sunny day is in a completely different category. Your comparison at the end is severely unbalanced, the Sun alone overwhelms everything else on both sides by a huge margin.
I think the point is that you likely won't do all of those things all day every day; rarely do you spend all day and night in the sun, drinking coffee every hour, and eating red meat 3 times a day.
Your phone is with you at all times of the day, always within 5 feet of your person, which means that if it leads to cancer (which we will likely find out within the next 30 years since American children are now surrounded by phones and tablets from age 5) then it's much more likely that you end up with cancer because of your phone rather than the fact that you were out in the sun for an hour every day.
The first iPhone was released in 2007. Radars have been used since WW2. Of course it's theoretically possible that cell phone radiation causes cancer to everyone but only after being largely ineffectual for 12 years of continuous use, but that's somewhat reaching, IMHO.
The sun is a known carcinogen. The fact that we do not have evidence that cellphones are a carcinogen is evidence that the effect size would be small if it exists.
This is tangential.
Faustian bargain. I just went through the Baader-Meinhof phenomenon with that phrase. I saw this thread, read your comment, and just went on about my day. Went to Reddit and started reading some comments there. Saw the same phrase just two minutes later! Weird stuff.