Haha, I appreciate your typo in my username. When I created this account a little over two years ago I mistakenly dropped the "l" and didn't realize for a few months (having only copied it from a pw manager). By that point what was done was done. Years later it makes me a bit happy to think people may actually be reading it as it was intended.
I seriously wonder if Fossil's model of keeping all issues in the clone isn't the way to go - set up a public "FossilHub" or some such thing if it doesn't already exist and then you get a local copy of everything, no worries about providers at all.
Fossil's model is pretty nice in my book, though since it's a much smaller community compared to other SCM options it has received far less polish overall. So, it has some awesome core ideas, but the implementation needs work to be competitive for most users.
They're not dead yet. Sure, they stopped working on their next phone and are searching for a buyer, but they also just pushed out a new Android P beta for the PH-1.
I read the Wikipedia entry and I immediately thought of Sibyl. It's horrifying and quite sad knowing that many people predicted this long ago and we did nothing but admire the literary excellence of their works (e.g. 1984).
I'm wondering this too, actually. I run a small business; we collect only the bare minimum of information from our customers, but we do have some European customers. I'm ignoring GDPR completely; is there any downside for me? Will they block customers from using my service? Will they seize my European cloud servers? Or can I safely keep doing nothing, as I currently am, because I don't reside or have a registered business in Europe?
The EU has a history of moralistic bullshit proposals like that stupid cookie law.
Which looks good in the eyes of the lawmakers (career politicians, I should call them) but doesn't really work in the real world.
People get used to accepting the stupid cookie prompts until it becomes a habit; within a couple of years the law loses its meaning (people blindly accept the cookie law) and no one cares about "the great privacy laws of the EU".
This is probably how GDPR will end up: no sane person has the time to read all the privacy notices and the crappy opt-ins just to order food as fast as possible.
Hey I'm starving I need that food ordered now, here's my location so you can deliver food here, I don't give a rat's ass about your privacy statement and clickady clack are there any more opt-ins to check before I can finally order my food?
Nobody just doing ordinary business things is going to get caught up in the GDPR. The EU are going to go after the local companies first and/or the worst offenders. Just sit back and wait for the case law and best practices to settle down and then decide what to do.
My feeling is it is going to end up like the cookie law, but who knows at this stage.
1: Ignore GDPR; you'll probably fly under the radar. And if you don't, fines are scaled to the size of the business and the number of people affected, as well as the severity of the privacy infraction. Encrypt your backups, encrypt PII if you can do it effortlessly, and you're good. If you are not using emails except for double opt-in confirmation, encrypt them too; the entropy is low, but this is better than nothing.
2: If you have some time and money to spend on improving your services: self-report. A public agent will point out the weaknesses of your data processing.
How? I'm not in their country, their laws don't apply to me or my business in any way, shape or form. They could perhaps argue I do business there, but that still doesn't give them anything to press charges against. Best they could do is block my site as far as I can guess...
I'm going to call bullshit unless you can provide a source that any overseas government can levy a fine for whatever reason and then "trash my credit".
If you have a lawful basis for collecting the information, you're only passing it along to others as necessary to provide your service to your customers, the customers have clearly consented, and you employ reasonable protection of that data... it's extremely unlikely that you're in violation.
And if you were, they'd come to you first with a warning (at least based on past behavior). They're not going to seize assets unless you seriously provoke them.
Why can't I ignore it? I have European customers, but they chose to sign up with a business in a foreign jurisdiction where their laws don't apply. If it's a problem, the EU can feel free to block my sites, but I can't see how it's negligent not to comply with laws that don't apply in my country.
I don't comply with laws from many other jurisdictions either. Should I start applying censorship laws for China and Saudi Arabia too? Why should the EU be special?
There are thousands of laws on the books where nothing happens when you ignore them. Sure it is possible that the EU will pick some obscure small company doing boring business things outside of the EU to make a test case out of, but how likely is this?
Anyone not up to shady activity can afford to wait for the case law and best practices to settle before doing anything.
I'm pretty sure that's incorrect, at least today: it's possible to skip through the initial setup on a stock Android device without adding a Google account or accepting a ToS.
If there is, they don't make it obvious. Whenever I've tried setting up a stock Android phone, I've looked for a way to do so without adding a Google account, but found no such option.
Perhaps it's possible to do so by pressing or holding some obscure sequence of buttons, but in that case it is reasonable to argue that a 'hidden' option isn't really an option at all. After all, you can't hide microscopic text on a paper contract and expect signees to be bound by it.
There may be stock Android phones out there that do provide a clear option to not use a Google account, but there are certainly many phones that do not.
I am using a Chinese no-name Android phone without a Google account. It is somewhat usable even without an Internet connection and without a SIM card. For example, I can use the camera, radio, music player, a dictionary, or offline maps.
You can use third-party app repositories like the FOSS-only F-Droid, or even simply download apps directly from individual creators if they release the APK.
I may be wrong, but from what I remember, it didn't show ads because Google refused to allow access to the official API in order to show them. It kind of forced Microsoft's hand.
Are iOS Authenticator apps actually calculating OTPs on the Secure Element? Is there a way to execute arbitrary code on it? If not, they have to pull the keys off to the main CPU where they're open to attack like anything else. Still secured as private app data, still mostly protected, but an attacker with a jailbreak could still dump them.
I know for a fact I can dump Google Authenticator keys from my Android device with root, as I'm able to back them up and move them to another device. Theoretically there's even a secure element available on most Android devices that could do this, yet I haven't seen any apps use it.
Most of the benefit of OTPs really comes from approving on a secondary device rather than protecting the keys to an absolute degree, though, so this is probably of little concern to most users. In fact it may provide a convenience benefit: I like being able to back up and move my keys, and without that I probably wouldn't use 2FA at all.
Using the secure enclave, you (as a developer) can have it generate a private key you'll never be able to get and then ask it to sign / encrypt (symmetrically) arbitrary things for you.
Given that TOTP (one of the more common phone OTP methods, used by Google Authenticator) uses a symmetric key, it seems unlikely it's being stored in the Secure Enclave.
It may just require an extra step. My understanding of TOTP is that a shared secret key (typically a base32 string delivered via QR code) and the current time step are used to generate the OTP. If the only thing stored on disk is the secret encrypted by the secure enclave's key, and the decrypted secret only exists in memory at runtime after the secure enclave decrypts it, then that still offers protection against some attack vectors.
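For reference, the algorithm being discussed is small enough to sketch. Here is a minimal RFC 6238 TOTP implementation in Python; this is an illustration of the standard construction, not any particular app's code, and the base32 secret in the example is the RFC's own test key ("12345678901234567890" in ASCII):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30-second steps since the epoch."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at t=59 this key yields code 287082
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))
```

Note that the HMAC itself runs on the main CPU here; in the scheme described above, only the step that decrypts the stored secret would involve the enclave.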
You (as an attacker) could then recover the key if you had full control of the OS and could trick the user into authenticating so the secure enclave decrypts the key, but would presumably have more trouble if you (as an attacker) simply stole the device.
You as an attacker would arguably have just as much trouble simply unlocking the device, so you'd be left with approximately the same amount of protection. As long as you have disk encryption, the security margin would be about the same. A marginal improvement at best.
>AFAIK that means it'll take more than a jailbreak to get to them, although I don't know if OTP apps are using that capability or not.
sure, you wouldn't be able to extract the keys, but what's preventing you from generating thousands of codes and extracting those instead? since they're time based, you could easily generate lots of them for a long time into the future (eg. 10 per day for the next 5 years). that should afford you plenty of opportunities to do a login attempt.
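A rough sketch of that harvesting idea in Python, assuming the attacker has momentary access to the raw secret; the key here is the RFC 6238 test key and the ten-codes-per-day schedule is just illustrative:

```python
import base64
import hmac
import struct

def totp_at(secret_b32, counter, digits=6):
    # Minimal RFC 6238/4226 code for a given 30-second time-step counter.
    key = base64.b32decode(secret_b32.upper())
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def harvest(secret_b32, start_time, days=5 * 365, per_day=10):
    # Bank a handful of future codes per day: TOTP depends only on the
    # secret and the timestamp, so these stay valid until the key rotates.
    interval = 86400 // per_day
    return {t: totp_at(secret_b32, t // 30)
            for t in range(int(start_time), int(start_time) + days * 86400, interval)}
```

Each banked code is only usable in its own 30-second window, so this isn't quite "log in whenever"; but ten windows a day for years gives an attacker plenty of login opportunities, which is the point above.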
This can't be used directly for generating OTP tokens (see the other comments), but what would stop you with a normal key on the secure enclave is that you can require the enclave itself to demand a higher level of authentication (facial scan match, fingerprint scan) before performing those key operations.
Yes, that's great for asymmetric stuff, but we're talking about TOTP, which uses a fixed symmetric key and a hashing algorithm. Unless you can run arbitrary code on the secure element, like you can with Intel and Qualcomm hardware, it can't be done; and even if it can be, it'd be a significant effort investment for what's probably a negligible security gain in practice. Still, I'd be pretty impressed if any apps did so.
I don't think there's much value in "something you have" so much as there's value in "approving this authentication via another device". Adding an additional device to compromise, running an entirely different platform, makes attacks much more difficult, even if we're talking about a poorly secured Windows machine and an outdated Android phone. Enough to make you effectively invulnerable to almost all non-targeted attacks, which will only breach one side or the other.
Android has literally had this feature for ages. By default plugging your device in puts it in "charging only" mode and you have to tap a notification and explicitly select MTP or PTP mode before it even attempts to talk to the computer.
On Android no data connection is made; when a connection on the data pins is detected, a mode choice is offered, but as far as I can tell no device is even detected by the OS until after picking a mode, so there's nothing to attack.
Let me know if I'm missing something, but I get no change in Device Manager/dmesg when plugging my phone in, indicating no data connection to me. It would appear the entire data connection is disabled until a mode is picked.
This is the normal behavior in iOS, and has been for years.
What Apple is doing is _additionally_ disabling the USB port on an even lower level. Currently, the port could still exchange data if it were tricked or hacked, but disabling the port on a controller level will prevent accessing the device entirely.
Or at least that's the theory. Since it can obviously be reconnected after entering a passcode, there are conceivably ways to get it to open up. But that will have to be tested.
Correct me if I'm wrong as I don't have any recent experience with them, but don't iOS devices expose an authentication interface even without unlocked interaction from the user?
Apple's own guide doesn't seem to indicate any form of interaction is needed to enable that interface. That interface is what's attacked by devices like GreyKey if I'm not mistaken. Android devices when not manually unlocked and toggled present no such interface.
That's different. iOS has the same "Trust this computer?" prompt before it attempts to pair. But in both cases the phone still has a data connection, which leaves it vulnerable to any kind of security compromise (such as GreyKey), plus if the computer has a "lockdown record" it gets to skip that "Trust this computer?" prompt anyway (a lockdown record is a thing a computer gets after pairing with the phone that lets the computer prove it's already trusted to talk to the device).
But what this article is talking about is after 7 days of not being unlocked, iOS 11.4 won't even enable the data channel on USB, which means computers with lockdown records still can't talk to it, and presumably devices like GreyKey can't compromise the device.
On Android no such pairing system exists (except for USB debugging); you have to explicitly allow it every time you want to mount the device, and no device shows up whatsoever: the USB data connection is disabled until after you pick a choice other than charging. You must unlock the phone and explicitly enable the connection. I'd argue that's still better than the iOS implementation, where an authentication interface is still available for 7 days even when locked.