> ...when you do use the biometric options we're about to get into, you're still going to need [a pin] on your phone anyway. For example, every time you hard-reboot an iPhone with Touch ID you need to enter the PIN
This is what has been missing from every discussion of this issue that I've seen so far.
The face scan isn't "insecure" even if you're worried about border searches. Just turn off your phone when you get in the security line! Pin will be required on start.
A PIN is also required when plugging into a new computer.
The rest of the time when you're going about your daily life, and are not worried about a government agent spoofing your face or pointing the phone at your face, you can use this nice feature.
Most people will be _less_ secure without it. They don't want to punch in a PIN every time they tap their phone to pay for coffee. So without the face scan feature, they will opt for no security at all.
The reboot/plug-in pin requirements change the discussion quite a bit, but are usually ignored, seemingly so bloggers can state the obvious "but someone can spoof your face!"
> The face scan isn't "insecure" even if you're worried about border searches. Just turn off your phone when you get in the security line! Pin will be required on start.
As far as border searches go, border officers have the authority to request your PIN just as they have the authority to request your thumbprint/faceprint/etc. If you don't give it to them, you can be detained and/or your phone confiscated [1]. Rebooting your phone won't help.
They can request your PIN all they want, but you are not obligated to provide it. They can temporarily detain you but not indefinitely, and the EFF is challenging their authority to even do that. [0]
Personally, I would refuse to unlock my phone. My privacy and upholding civil liberties is worth being detained for a few hours (or even days).
Also, the "border" for search purposes extends 100 miles inland from the Mexico/Canada borders and 100 miles from the shore. So if you're concerned, it's not only when you're entering or leaving the country; it's any time you're in LA, NY, DC, SF, Houston, or Detroit, or anywhere along the other thousands of miles of border.
Common misconception: it's only if you have crossed the border that they can search you within 100 miles of it.
Of course, how you prove you didn't cross the border is an open question. But you can in fact refuse the search on that claim. I suppose they may detain you then.
I think you're looking for the phrase `probable cause`. If the officer has probable cause to think you crossed the border, they can search you. I'm not a lawyer, but I think they can search you anyway; it just won't be admissible in court.
Edit: to clarify, US law pushes police right up to the edge. There is a preference for false positives rather than false negatives: it's considered more important to catch all of the criminals than to avoid inconveniencing some innocent people. The risk of letting one criminal go is weighed as much greater than the cost of detaining a doctor for a couple of hours.
Now we're in this weird time where that doctor can have 20 years of HIPAA-protected medical records in their pocket that they might be forced to disclose. Historically, that doctor may have had some records in a briefcase, but not tens of thousands.
They can even nab American citizens for drug possession:
"One of the people arrested was a U.S. citizen who fled the checkpoint and led the police on a five-mile chase. The unnamed man was arrested and charged with three felonies, including reckless driving, possessing a controlled substance, and endangering the welfare of a minor."
I read that they are not able to force you to provide a pin.
But even if you're right, this doesn't change my argument. Most people are less secure most of the time in the absence of biometric authentication. Because without it, they will opt for zero security. You can always use the pin in addition, for whatever that's worth.
You know what would be really neat? A different, restricted/camouflaged unlock when you make a slight facial expression that would probably go unnoticed.
regular face: regular unlock
right eyebrow raised a tiny bit: hide my sensitive stuff from a casual search*
*and after a few minutes, if I don't deactivate it, start deleting.
On top of this, iOS 11 introduced an emergency mode that is enabled by tapping the standby button five times rapidly. It is easy to do discreetly in your pocket, and it locks your phone by requiring your passcode to be entered again.
One thing to note: It's at least five times, not exactly. Just click that side button until your phone vibrates a bit and it's locked. I mainly see myself using this feature in high stress situations, like going through airport security or being pulled over, so not needing to count is a small plus.
It's also worth pointing out that iOS already has an option to erase your phone when the PIN is entered incorrectly a certain number of times. A lot of companies enforce this if you receive your work email on your device (eight times in my case). So if you are in a situation where a PIN is being demanded there's still a way to keep secure.
Your phone can be easily restored from an iCloud backup once you've got access to wifi. If there's data you don't want seen and you can somehow be compelled to do a restore, you can always restore from another device or backup instead.
I haven't tried since installing iTunes 12.7 (the version that removes the App Store, etc.), but reports I've read suggest it still backs up everything, with the notable exception of apps. These presumably now have to be re-downloaded from the App Store on the phone itself. I'm assuming, though, that all app data is preserved (one would hope so!). Happy to be corrected if this proves not the case.
From macdailynews:
"iTunes backups are still there for iOS devices, but performing a restore won’t transfer your apps from your Mac, but will instead download them over the Internet from Apple, which is, of course, likely to be slower."
If a judge issues a warrant allowing the police to compel the use of biometrics, can a person use the wrong finger? Courts can't compel someone to enter their PIN. So can a court compel not only biometric "use" but the correct biometric?
My Android forces PIN entry after 5 wrong fingerprints.
If you refuse to tell them which finger you used, and comply by only pressing whichever finger they ask you to press, they will have a 10% chance of getting in each time.
So if you chose only 1 finger and you chose it randomly, they will have a 50% chance of getting in after 5 attempts.
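That arithmetic checks out. A quick sketch (purely illustrative, assuming one enrolled finger chosen at random and five distinct guesses before the PIN lockout kicks in):

```python
from math import comb

FINGERS = 10   # possible fingers
GUESSES = 5    # distinct attempts before the PIN lockout

# Probability that five distinct guesses all miss the one enrolled finger,
# i.e. all five guesses are drawn from the nine wrong fingers:
p_miss = comb(FINGERS - 1, GUESSES) / comb(FINGERS, GUESSES)
p_hit = 1 - p_miss
print(p_hit)  # 0.5 -- a coin flip, as the parent comment says
```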
Since I use fingerprint authentication for the vast majority of authentications with my iPhone, I set the actual PIN to a complex alphanumeric password. I only have to enter it a few times a week, so it's worth the hassle. In this sense fingerprint authentication has significantly improved the security of my phone, because it's so much more difficult to shoulder-surf that alphanumeric password than it is a 4/6-digit numeric PIN. Biometric authentication definitely seems like a win to me.
I don't want to be that 'if you've got nothing to hide then' guy but why are people so worried about what border agents in particular will see on their cell phone?
I am not saying that I wouldn't mind at all if my phone was searched. But I can't think of anything in particular that I would be concerned about if it was. Sure in theory the agent could remember some personal information and come back later and use that info or pass it to someone for some nefarious purpose. But that's a pretty small chance event. It's more or less the equivalent of thinking you will get sick because you find a hair in your food at the restaurant. You don't like it and you send it back or demand a refund but the actual harm is more mental in nature.
There's lots of stuff I'd rather not be seen by strangers. Love notes to/from my wife, love notes to/from my mistress, photos of myself or others that I'd rather not be seen by strangers, financial information (through online banking apps, etc), just to name a few.
How do I know the agent isn't downloading the naked pictures of my wife? (There are lots of reasons to not keep such pictures on my phone, but "Because border agents may see them" should not be one of them)
And there's just the futility of it -- if I really had something to hide from the government, I wouldn't keep it on my phone (or if I did, I'd keep it hidden).
> the actual harm is more mental in nature.
That doesn't make it any less real - the government should not make me feel violated.
> Bikkannavar says he was detained by US Customs and Border Patrol and pressured to give the CBP agents his phone and access PIN. Since the phone was issued by NASA, it may have contained sensitive material that wasn’t supposed to be shared. Bikkannavar’s phone was returned to him after it was searched by CBP, but he doesn’t know exactly what information officials might have taken from the device.
Once he unlocked his phone, they were gone with it for 40 minutes. Plenty of time to copy the device contents and peruse at their leisure.
Depends on how difficult it is to produce an automated system whereby one need only plug an unlocked phone into a computer in order to hoover up all locally-stored messages, contacts, saved passwords, synced browser histories, etc. The agents themselves don't need to care; they can just be instructed to "unlock the phone, plug it into this computer, wait until the progress bar finishes, hand the phone back". Are you on GitHub? Is your GitHub password saved in your browser? Do you have commit access to any marginally important projects? God only knows who has commit access to those projects now. Considering this is HN, I imagine this is an attack vector that a lot of people here are particularly concerned about. Given how much code we use that's written by others, it ought to concern the rest of us as well. :)
Not sure why you have been downvoted, since it is a valid question and you phrased it in a productive way.
I think the reason people focus on border searching so much is that it's presumed to be one of the easiest entry points for the government to spy on you.
Many security researchers, such as people who work on Tor and related technologies, have had their electronics searched and sometimes seized at the border. That makes it particularly relevant to the HN crowd.
For me, I seriously doubt the government would ever be interested in my data. But imagine that the CEO of my company is suspected of some kind of crime. Perhaps when I am at the border they try to image my phone, in order to get access to things which may allow them to control other company assets (servers, etc), which would get them closer to their target.
I doubt it would ever happen to me, but I'd rather take precautions. Being searched at the border seems much more likely to me than, say, the NSA/FBI targeting me remotely with malware.
My Canadian friend (woman) made the mistake of making a connecting flight in the States on her way to visit me in Mexico.
The border agent accused her of prostitution. He wanted to get into her phone to see her latest Facebook Messenger and Tinder correspondence. She let him because, of course, it's easier than canceling her entire vacation plans.
Now, the tips in this thread wouldn't have helped her. But do you see how it's not just "I don't have anything to hide?" What about not letting some border agent power trip all over you? Reading your Tinder messages for fucks sake?
What if they automate the search and copy all your data for further mining and save all that info to see if you're not "desirable" for the "people in power"?
It is not only you. It exposes everybody to be blackmailed by the government or whoever gets hold of the information. Judges, politicians, businessmen, scientists, journalists, etc., etc. It weakens the democratic fabric.
I'm not sure if it's relevant, but Dubai and the UAE have been known to put people in prison for quite unbelievable reasons, often as they were transferring through the airport.
One such instance involved a British guy who had shared a Facebook post recommending giving aid to refugees; Dubai police put him in jail over it. That was over a year ago, and I believe he's still there. Another involved a microscopic amount of cannabis found on the sole of a passenger's shoe; he got almost 4 years in prison.
There are many, many other instances. If locking your phone stops them from even chancing it, it might be worthwhile.
What I worry about is Google Authenticator codes for several financial institutions, getting vacuumed out of the phone with everything else and stored on some insufficiently-secured database.
One phrase I've used with success in the past is that I have secure governmental cryptographic codes, owned by someone other than myself, on my laptop/bag/iPhone, and am not allowed under law to disclose them to unknown parties such as yourself.
Most first level TSA/DHS people measure the cost/benefit of my assertion and back off.
Just because you are innocent and have nothing to hide, it does not mean they cannot still use the evidence you provide to convict you. Basically, it comes back to the same reason that many lawyers generally advise you to never talk to the police (1). IANAL.
Do you have any work data or email on your device? What is your employer's policy about granting 3rd parties access to such data? Could you be fired for doing so?
The issue is that they can compel you to unlock the phone and then make an image of the phone's contents which would allow them to clone it to another phone. They would then have full access to everything on your phone including any linked accounts and cloud services.
What's on your cellphone today; who knows what it will be tomorrow.
I'd personally much rather the State fundamentally respected a right to private and family life, much like article 8 of the ECHR, but fat chance of seeing that in the USA any time soon.
You don't even have to do that anymore. With the new OS that will be standard on the new iPhones, you can just tap the power button 5 times to put it into a forced-lock mode. It'll require the use of your passcode before Face ID or Touch ID can be activated.
I really liked this write-up because it focused on the practicality of the various security mechanisms. Most articles I see usually have a blanket statement like "All biometric security mechanisms are bad!". I think this article does a good job comparing the various logins and describing the pros and cons for different people. Specifically, I appreciate the author calling out when people bring up the "What if" edge-cases, where the correct response is you likely have much bigger problems at that point than the security level of your phone.
Specifically, getting more people to have better security on their devices is a very difficult User Experience problem, and Apple's pretty good at solving these kinds of problems.
TouchID moved the ball forward quite a bit, and FaceID will probably go even further.
Obviously neither provides ultimate security, but Apple has a strategic advantage since they make both the hardware and the software. You can make the barn walls and roof super secure, but it does nothing if the front door is left open.
Before TouchID, I set my passcode to 0000 with a four-hour window where I didn't have to reenter it. I only had one set at all because Find My Friends refused to keep me logged in unless I had a passcode set.
With TouchID, I have a complex passcode that I have to enter a couple times a week. It's less secure than some hypothetical setup where I have a complex passcode I have to enter every time I unlock the phone, but it's far more secure than what I was actually doing before.
My android phone forces me to re-enter my passcode every 24 hours.
I think that strikes a nice security median. If someone does get possession of my phone, I only need to stall for less than 24 hours.
The rest of the time, the fingerprint scanner works near perfectly. It's actually faster to use the fingerprint scanner than the standard slide to unlock, which is all I ever had setup on my previous phones.
iOS does the same after 48 hours of not being unlocked or re-authorized. I agree that this seems like a decent security compromise. Anyone with physical access to your phone for more than 48 hours has other vectors to pursue that are far easier than just trying to guess your password.
That's why you temporarily disable biometric authentication before you go through customs. On iOS, this is as easy as turning the device off, and in iOS 11 it just takes five quick presses of the sleep button.
That's the point: "constitutionally," they can lock you up for hours or days on end while they obtain the warrant needed to make you give up your password, unless you are willing to stay locked up.
Many border agents, it seems, have the “if you have nothing to hide” mentality. So if you’re refusing to unlock your phone, clearly you’re hiding something.
Re: the pushback the author got on Twitter; I believe in skepticism towards corporations and marketing claims, but the level of cynicism online towards any new tech idea or product seems a bit out of hand. There's a certain trend, on Twitter especially, of people racing to prove they're either more woke or smarter than the teams of people behind things that are yet to even be released. I mean a "wait and see" attitude wrt the actual effectiveness is good, but I don't get why we need to concoct extreme hypotheticals here suggesting Apple is somehow irresponsible for adding an optional feature.
> There's a certain trend, on Twitter especially, of people racing to prove they're either more woke or smarter than the teams of people behind things that are yet to even be released.
This has been a trend of a vocal minority on the internet for as long as I've been connected to it. Remember "No wireless. Less space than a nomad. Lame"?
"Attention awareness" is Apple's own description. How long before it's turned against the user to confirm "Advertisement Awareness" or "EULA Awareness", etc.?
Given that the authentication methods are "differently secure," wouldn't it be good if we were offered the option to combine them and require both for unlock? I would love to use Face ID + PIN or Touch ID + PIN for better security.
I want this, and also the ability to secure different areas of my phone.
I want to be able to set touch or PINs for certain apps, so that I can have multi level security. Why is there so much emphasis on one master password/touch ID/face ID instead of having multiple security checks?
All of my banking apps, among others (e.g. password manager), include options for both pin and touchID auth. Do we not already have what you’re asking for?
I think he's requesting it on the system level. For example, if iOS allowed you to force PIN+TID when opening a specific (not necessarily secure) app for the first time after an unlock. Intended for apps that don't necessarily have security built in already.
This is especially useful for when the app developer doesn't think there is a reason for them to develop such a system. For example, I don't want my son playing a zombie game, but I myself play it often. I could then lock him out of said game, but still give him my phone to use.
I've been wondering that too. If possible, it'd be nice to combine username, second factor, and password, as they all perform different functions that people often conflate:
- Your username is who you claim to be.
- Your password is proof of that claim (something you know).
- Your second factor (faceprint, thumbprint, keyfob) is further proof (something you are or have).
Large public systems could do away with usernames if they assigned random, unique passwords to users out of a large keyspace. Usernames allow the keyspace to be smaller and the keys to have less entropy, making user-chosen keys possible. But then we add minimum-entropy requirements (sometimes using length alone as a proxy) to combat the freedom given.
Would be interesting to see whether username-less public systems suffer fewer intrusions. Some users would record their passwords in unsafe places, but they couldn't reuse a password, which eliminates credential stuffing as an attack vector.
Of course, if you allow password resets you'd still have a kind of "backup" username presuming a person's email address could be used for the reset. But that wouldn't significantly weaken the security, arguably.
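As a rough back-of-envelope sketch (all numbers here are made up for illustration): dropping usernames and assigning each user a random password costs roughly log2(N) bits of effective online-guessing entropy for N accounts, since a guess succeeds if it hits *any* account, yet that still leaves far more entropy than a typical user-chosen password:

```python
import math

ACCOUNTS = 10_000_000   # hypothetical number of users in the system
RANDOM_PW_BITS = 128    # system-assigned random password

# Without usernames, a guess succeeds if it matches any account's
# password, so effective entropy drops by log2(number of accounts):
effective_bits = RANDOM_PW_BITS - math.log2(ACCOUNTS)
print(round(effective_bits, 1))   # ~104.7 bits remaining

# User-chosen password entropy is often estimated at only 20-40 bits:
CHOSEN_PW_BITS = 35
print(effective_bits > CHOSEN_PW_BITS)  # True -- still a huge margin
```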
I believe that Touch ID + Face ID would be a good combo as well for the vast majority of users. Both technologies seem to be relatively foolproof and user-friendly. Upping the ante from a security perspective would be a good thing. Unfortunately, that won't come to fruition.
That's what it does right now: if Touch ID or Face ID fails, you still have your password to fall back on. I think it may do this automatically if the touch sensor dies, but you can always force it with the 5x quick taps on the power button.
It is, but GP is saying if the auto method is FaceID + password and FaceID doesn't work... then what? Clearly just the password isn't enough to get you into the phone.
We can have two options for unlock: (Face ID + PIN) OR (Alphanumeric Passphrase). The former for daily use and the latter as backup or for system restart.
I'm not sure that Touch ID can come back in its current form. Phil stood on stage and told us Touch ID has a 1-in-50,000 false match rate while Face ID's is 1 in 1,000,000.
I think the only way for Touch ID to come back is for it to cover the whole display, so it can authenticate every touch.
Agreed. If you could make it so every single tap was scanned, you could make the phone so secure, yet so simple to use.
The phone could constantly monitor both Face ID and full-screen Touch ID at the same time, and if it detects anything funky going down it can panic and lock. Would close down attacks where someone grabs your phone off you while it's unlocked and quickly disables security.
I highly doubt this will be possible for a long time, but it would be interesting if Apple could equip the entire display with a Touch ID sensor. If it could scan your fingertips anywhere on the screen, then you could theoretically "sign" every interaction with the display.
Exactly what I'd like as well. I'd like to be able to use one of two methods to unlock: a fingerprint and a modest PIN (with lock-out after a couple of tries), and a long passphrase (also required at boot).
The problem is that the PIN is always dominant over Face ID or Touch ID. Face ID doesn't work? Use your PIN. What happens if you use local 2FA and your Face ID doesn't work? You can't enter the PIN, because that would effectively render 2FA useless. This would probably require some master PIN, which makes things more complex, and increased complexity correlates with reduced security.
It would be interesting if we could specify a particular face pattern to unlock the phone. Imagine you set up your phone to open only if you smile; now if someone picks up your phone and tries to unlock it by pointing it at your face, not smiling would be easier than closing your eyes or looking away. Not even mentioning the health benefits of just smiling :)
It would be awkward to smile/pose before/after a funeral, just because I need to call my mum or check my email...
That being said, I do think that there could be a legitimate use case here. One could set up a particular "emotion" (a face pattern) associated with someone forcing them to unlock a phone using their face. I mean, if someone pulls a gun or a knife on me, I'll probably just do as they say and look at the phone, rather than risk an additional hole in my body. But unlocking a phone and sending a distress call is something I could live with.
Yeah, if someone has a knife or gun I would just give them what they want and worry about a distress call after I'm safe, instead of getting fancy trying to activate an 'I'm being mugged' feature.
I think that could be a nice feature, but trying to remember how to do that special thing or enter an alternate code would add stress to the situation, when you should just be focused on staying alive.
I'm thinking of a situation where the phone, for example, is not enough for them. What if they don't leave you alone after that? What if you're a girl and they are going to try to rape you? What I'm thinking of is not a "fancy" help-me-I'm-being-mugged "duck-face" pose, but a face pattern which, simply enough, could offer assistance in a difficult situation. What if they take away the phone and I'm left with no chance of calling for an ambulance?
You are right, though, that this requires some "friction" and probably some self control.
Huh. Taking that a step further, instead of Face ID being a (probably) static shot of your face, what if it were a sequence? For example, if you could set your "code" to be smiling and then frowning? Or, given it can detect eye location ("awareness"), looking at different corners of the phone in a particular sequence?
Another approach might be to have your eyes follow a pattern. Either a static pattern which is always the same, or a dynamic pattern with a challenge each time.
I seem to recall a video-based one that did something like this - it would tell you to make a specific facial posture to unlock the device, maybe Google's?
This is a really well-written, considered view of the trade-offs for using different options for security. I learned a lot from reading, and the plain language discussion of the topic allows most any reader to better understand the trade-offs present for each option.
Much appreciation to the original author; it takes a good deal of time and effort to write something that lucid. Thanks.
> a thread emerged about abusive spouses. Now if I'm honest, I didn't see that angle coming and it made me curious - what is the angle? I mean how does Face ID pose a greater threat to victims of domestic violence than the previous auth models?
If someone has the PIN and the phone, they can get in without the person (without their biometrics.) Fingerprints and Face recognition increase the chances that an abusive spouse needs the other person every time they access the phone.
Parents who have their children's passwords are in the same situation: they can't snoop on their kid's biometrically secured phone (like reading a kid's diary in the old days). They have to have the kid open the phone, which means the kid knows it's happening.
I'm not sure, but I know with my spouse and me, we always put each other's biometrics into each other's devices (I add her fingerprints to my phone and vice versa), share a LastPass account so we know each other's passwords, add our email accounts to each other's devices so we can always look at each other's email, etc. If you are married to someone and don't trust them enough to do the same, I have to question the foundation the marriage is built on. As for kids, we just don't allow them to have a phone or other device until they are old enough to buy it with their own money, which so far has never happened until they are nearly 18 and getting ready to leave for college anyway.
Not trying to tell you how to live your life, but being an open book to each other isn't necessarily very trusting. If your trust is built around being able to spy on the other, maybe you aren't trusting each other that much. My wife and I both have our own separate phones and computers without each other being able to access them, and I trust her not to do anything that goes against our interests, and vice versa. That's what trust is: not having to know what the other does and knowing they are doing the right thing.
I also let my spouse unlock my phone. It's not so much "being an open book" as it is "I trust her not to snoop, and sometimes it's convenient that she can open a map on my phone." That meets your definition of trust: not having to know what she does because I know she'll do the right thing.
Very fair point. In fact, my wife actually knows my phone's passcode for the exact same reason. "Open book" was definitely the wrong phrasing here; I wasn't trying to say that you have to keep everything secret from your spouse. Just that the exact opposite of that also strikes me as very distrusting.
There are also lots of people in abusive (or just "problematic") relationships for whom some privacy around their phone can be extremely valuable. That's why I'd prefer the societal norm to be NOT sharing credentials. Which of course doesn't mean that all people have to follow that.
I just got tired of having to unlock my phone every time I asked my boyfriend to send a message while I was driving, change what music was playing, etc., so I enrolled his fingerprint. I imagine most people that do this are thinking on the same lines... I would say that this is an indication of trust, as I trust him not to go snooping around on my phone.
It's kind of perverse to jump to spying, don't you think? I'm not spying on my wife, nor she on me. Even though I have her email on my phone, in the past year I think I've only been in there twice, both times when she asked me to look for something for her. I know she doesn't look at mine either, because she just doesn't care.
I don't think it's completely insane, but I think the parent comment overestimates the proportion of marriages that have that kind of trust. If people didn't get married without 100% trust then most people wouldn't be married, and evidence is that people prefer imperfect relationships to loneliness.
It's also useful to have a norm against doing these things even if you have 100% trust, because it's probably the case that people are too readily fooled (by themselves or a manipulative partner) into thinking they have 100% trust when they actually don't. So having a norm against sharing everything protects the people who need it, even if you're not among those people.
Would be interesting to enable voice authentication contemporaneous with face scanning, to make sure the lipreading matches the utterance, the utterance matches the voiceprint, and the voiceprint matches the expected face. Bonus points if the vocal channel could be used to detect duress (especially if accompanied by, say, raised eyebrows) and either require further authentication (passphrase entry) or present a "false unlock" revealing only nearly factory-fresh apps and data underneath. Could also potentially send a notification to friends that your phone had just been unlocked under duress. Extra bonus points for hard-scrubbing the underlying true data in parallel while displaying the false, boring phone interface.
What I'd like to see is this tied into an identity system, such that the ring (or other very-hard-to-misplace, but replaceable and discardable) token is not itself an identity, but rather an access token to an identity store which can present any given identity to any given system.
That might be a consistent identity across multiple sessions or unique identities on each session. The identity might be tied to some central certifying agency (e.g., a motor vehicles department or national pensions fund), or not.
There are several elements of this which I'd like to see developed further, including how keys might be reconstructed or recovered using a quorum system of trusted sources (divide your key into pieces, share those amongst friends, family, or some local authority, such that key loss need not equal data loss), and possibly via law enforcement.
I'm also looking at the possibility of a public ledger system which might allow for both workfactor requirements and public disclosure of keys being revealed. This may be a viable application of crypto, though I'm not entirely sure of this.
(The feature might also be optional -- you could take the risk of key loss, or allow for recovery. But the present situation with PKI of losing access to all previously-encrypted data in the event of key loss would be mitigated.)
There's also the requirement for devices to have support for near-field readers. I'm told this is already largely a reality, though my reading of specs for various mobile devices suggests otherwise.
The biggest challenges through all of this are not the technology itself, but the adoption, requirement, and enforcement of standards, including availability of tokens at low or no end-user price. Trust of the information ecosystem overall might be a suitable incentive for this to happen.
So, someone steals the NFC ring and then owns the phone? Ring + heat detection of the PIN tap pattern will end up giving a false sense of 2FA. (Not sure how the ring authenticates on being worn; didn't see it on the website.)
Personally, I'd like to see identity tied to my smartwatch, with authentication happening via capacitive coupling + Bluetooth.
The way I'm envisioning it:
1. Physically touch the object you want to authenticate to. (E.g. Computer, payment terminal, smart lock, etc.) Watch uses capacitive coupling to bootstrap a Bluetooth connection to that device.
2. Device requests authentication & authorization from Watch.
3. Watch either authenticates you instantly (for lower security applications), or requests you to confirm the transaction with your fingerprint/face/PIN (higher security applications)
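Steps 2–3 above could be sketched as a challenge-response over the bootstrapped channel. A toy Python version, where `SHARED_KEY`, the 2000-cent confirmation threshold, and all function names are illustrative assumptions rather than any real protocol:

```python
# Hypothetical challenge-response between a terminal and the watch.
# HMAC over a paired shared key stands in for whatever real crypto
# a production scheme would use.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)  # provisioned when the watch was paired

def terminal_request_auth(amount_cents):
    """Step 2: the device sends a fresh challenge plus transaction details."""
    return {"nonce": os.urandom(16), "amount_cents": amount_cents}

def watch_approve(request, key, user_confirmed):
    """Step 3: the watch signs exactly the transaction shown on its screen."""
    if request["amount_cents"] > 2000 and not user_confirmed:
        return None  # higher-security path: demand fingerprint/face/PIN first
    msg = request["nonce"] + request["amount_cents"].to_bytes(8, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()

def terminal_verify(request, tag, key):
    msg = request["nonce"] + request["amount_cents"].to_bytes(8, "big")
    return tag is not None and hmac.compare_digest(
        tag, hmac.new(key, msg, hashlib.sha256).digest())

coffee = terminal_request_auth(amount_cents=350)  # low value: instant approve
assert terminal_verify(coffee, watch_approve(coffee, SHARED_KEY, False), SHARED_KEY)
big = terminal_request_auth(amount_cents=50000)   # high value: needs confirmation
assert watch_approve(big, SHARED_KEY, False) is None
```

Because the signature covers the nonce and the amount, the watch's screen showing the dollar figure actually binds to what gets authorized, which is the point of having a display on the token.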
This method would also enable a lot of other neat tricks to further increase security, like checking your heart rate and refusing to authenticate if you're asleep or the watch isn't strapped to your wrist anymore, requiring additional authentication methods to unlock your watch after you take it off, displaying the dollar amount for monetary transactions on your watch when asking for approval, etc.
A watch or bracelet could also work, of course. Even a neck pendant if that's your thing. The point is physical, on your person, and crypto based on near field.
The problem with longer ranges, even just a few cm, is the prospect for snooping or triggering unintended authentications. My preference would be mm range.
> The problem with longer ranges, even just a few cm, is the prospect for snooping or triggering unintended authentications.
The advantage of using a watch (or another device with a built-in screen) is that it avoids exactly that problem. When you authenticate, you have to physically press a button on the device, and the screen tells you precisely what it is you're authorizing.
Using capacitive coupling as the initial communication channel would also help with that, since you'd have to actually touch the object you want to authenticate to with your bare skin.
Both good points. I've been thinking of some contact / button interaction as well, though that's a toss between keeping the device as physically and electrically simple as possible, vs. some level of interaction. A circuit-completion button on a ring might work, which wouldn't require, say, an additional battery. Though battery life would quite likely be years.
Do you have any refs on capacitive coupling? Is that essentially how touchscreen devices work? How does it fare in exposed / outdoor environments? I'm thinking of wide applications, and something which couldn't handle, say, Tokyo Subway levels of use and demand isn't particularly appealing.
(That's tabling the discussion of whether or not you'd want to have per-use charges for transit use or want to offer that as a public service, or only filter based on individuals, etc.)
Field range might be set by speed-of-light delays. Roughly a nanosecond per 30 cm (about a foot).
Given a 4 GHz clockspeed, your time resolution is about 0.25 nanosecond, or 7.5 cm -- call it 3 inches.
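The arithmetic above checks out and is quick to verify:

```python
# Back-of-envelope check of the distance-bounding numbers above.
C = 299_792_458  # speed of light, m/s

ns_per_30cm = 0.30 / C * 1e9
print(f"{ns_per_30cm:.2f} ns per 30 cm")    # ~1.00 ns, as stated

clock_hz = 4e9
tick_s = 1 / clock_hz                        # 0.25 ns time resolution
range_cm = C * tick_s * 100
print(f"{range_cm:.1f} cm per clock tick")   # ~7.5 cm, about 3 inches
```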
Yeah. I ... had to edit the URL, as I'm used to specifying https rather than http these days.
There are a few other flags raised about that particular implementation, though the concept itself is the key point. The idea of a signet ring to authenticate, sign, access, pay, claim, and/or decrypt seems useful.
You’re not alone - I do that every time I copy or share a URL too, it’s a good habit to get people into and sites that fail to provide working HTTPS don’t really have an excuse these days.
It appears that FaceId only supports a single face (unlike TouchId, which supports multiple fingers).
Maybe this use case isn't common, but my wife frequently needs access to my phone. Usually while driving, to change GPS routing, or playlist, or respond to SMS. With TouchId, she can do so without my PIN. With FaceId, she needs my PIN.
This strikes me as both less secure and quite annoying. Now, I have to repeat my PIN out loud while she types it into the device. Or, force her to memorize it (in addition to her own PIN, and I have to remember hers for the reverse situation).
For me it's not so much the paranoia or the degree of security (which is an arguable point in itself) but the commodity of it. Touch ID lets me unlock my devices without having to re-position my upper body or move them in (practically) any way, and Face ID feels awkward (I'm typing this on the device that is likely an exception to that - a Microsoft Surface Pro - and Windows Hello's face recognition works beautifully, but I am _always_ facing it when I need it to unlock, so...)
There's some app that uses the face++ api to do login using facial recognition, I can't recall the name of it off the top of my head but maybe you can find it with that info.
edit: how could I forget, it's the little known app called Alipay. /s
But what was wrong with TouchID? Were there any examples of it being weak security? What will come after Touch ID? Will Apple continue progress and build in PinchID - a tiny needle that stings you to test if you are you based on your blood/DNA?
This is a serious question. If there was nothing wrong with Touch ID, then why was it removed from the new phone and replaced with Face ID?
I'm also concerned about the data Apple will collect. I assume the information about your face has to be very detailed for FaceID to be secure. Is it really that much of a stretch to see an article in 6 months: FBI got a copy of the whole Apple face database and was able to identify and find a very dangerous criminal?
> TouchID was removed because it took up space on the front of the phone and Apple wanted the screen to be bigger. There's no deeper reason than that.
The Pixel handset has the fingerprint sensor on the back of the phone. It appears to work quite well. Much of this needless outrage could be obviated by allowing multiple simultaneous biometrics for auth; while taking the phone out of your pocket, place your finger on the sensor to initiate FaceID.
I confess, never in my years of phone ownership have I had that specific use case, or considered that it might be important for others. Do you find yourself doing that often? If so, can I ask what for?
Second, facial features are more distinctive than fingerprints - according to Apple's own presentation, there's a 1 in 50,000 chance that prints from different people would unlock it. With Face ID, this becomes 1 in 1,000,000 (iirc).
There are more people who cover their face (e.g. Muslim women wearing burkas) than people who don't have fingerprints. I wish they had both TouchID and FaceID.
You have to decide which type of biometrics to use when you choose your model, but that's fine for a lot of cases (such as those who wear burkas or those who have no fingerprints).
Hopefully the return period gives people enough time to be sure they've made the right choice.
I unlock the phone while it's still in my pocket; by the time it reaches eye level, it's already unlocked. And with a few muscle memory tricks, there's even a chance I have opened the right app, without even looking, in the fraction of a second it took me to take the phone out of my pocket.
I've actually never seen anyone doing that. I don't think that's a valid argument against Face ID. It's still faster than a PIN code and (seems) more secure than touch ID.
My phone is currently sat on my desk, about 10 inches from my right arm. I can, and do, check messages on it by only repositioning my arm to unlock it.
I easily read any messages by glancing at the phone, never coming into any decent imaging range.
My phone is on my desk next to my laptop in the same place where I've been texting my wife for the past five minutes. I have turned on the front facing camera.
My phone can see the very corner of my head and about half of my eyebrow. I do not want that to be enough to unlock my phone.
Pretty much everyone at your standard 9-5 office job has their personal phone with them all the time and their personal computer with them none of the time.
One reason would be ApplePay. At least in Europe most payment terminals have their NFC sensors on their side, so you're supposed to hold your phone flat with the side of a box, so the selfie camera just points to the wall on your left. In that position, there is no way that the camera of your iPhone could see your face.
Compare this to Touch ID, where you hold your phone to that box with your finger on the sensor and half a second later you've paid - very convenient.
I would take a guess that Apple has not yet tested any iPhone X with European payment terminals and will get a lot of angry emails once people try that combination.
I wonder if you can get Face ID to work with your butt or other body parts, similar to how people got Touch ID to work for certain "non-thumb" parts of the body.
Honestly, the only downside I can see vs. TouchID is that you can in theory point the phone at the person and unlock it. However, this is balanced out by it not working while unconscious.
What I want to know is what face data is shared with 3rd parties like snapchat. That seems like the bigger threat, and no one is really discussing that.
Yes. If you have time to do it, also press 5 times on the on/off button, it disables Face ID and forces entering the PIN. It's an iOS 11 feature, I just tried on my iPhone 6 and it disabled Touch ID.
What if users were able to disable FaceID by blinking x times or by having their eyes closed for a certain time period? Maybe require FaceID + a different PIN after the phone recognizes that deliberate lock.
For phone-based biometrics, the PIN has always been a backup for the fingerprint/face recognition, because these things aren't 100% reliable. Not even considering any security aspects here, just practicalities, like your hands are dirty or your face isn't being recognised (perhaps you've got bandages on your face or whatever). Having the PIN as a replacement is a good thing in these cases, otherwise you could be locked out of your device when you actually want to use it.
For 'extreme' security situations, you might as well just have a long secret PIN and no biometrics.
The author started off saying how less than 1% of Dropbox users use two-factor authentication. What good is such a scheme when nobody is going to use it?
Apple need to create a system where stolen phones can be reported to them, Apple can then contact the owner/verify they are stolen. And then add them to a stolen list and disable calling/apps on those phones. And display an overlay on the screen THIS PHONE IS STOLEN.
Every iPhone would come with a validate-phone feature, accessible even when locked, that can authenticate the iPhone for anyone thinking of buying it.
The potential buyer can check if the iPhone is stolen by using the feature that is allowed to connect to the internet and validate the phone.
They need to make it so stolen iPhones are worthless, so when you are getting mugged criminals won't even want it.
Obviously have an option setup where you can transfer ownership of your phone. Maybe with a 7 day waiting period.
Find My iPhone + iCloud is what you're describing.
Even a DFU restore of the device won't help a thief, as the activation process will simply ask for your iCloud login and will display a "Message From Owner" that you can set at icloud.com indicating the device was stolen, making it much harder for someone to purchase and claim ignorance about the origins.
Can confirm. We ran into this problem in my company, as we had contractors set up phones and turn on Find My iPhone. We ended up having to do some not-so-great things to make it so we could use the devices again.
Yes, that's true, but I feel like Apple could do more so criminals don't even want to take iPhones. Apple could put things in place so they aren't worth anything on the secondary market - basically unusable, and could lead the cops to your house.
This way if you're on the subway you can have your phone out without worrying about getting robbed.
If your house gets robbed, they would leave your phones because they can't be sold or reused.
They already are. You can sign in to your Apple ID, nuke your phone from orbit, put it in Lost Mode so that it can't be used for anything other than 911 calls, etc etc etc. And you can call your carrier, report it stolen, and bam - the ESN is blacklisted and the phone is a brick.
But the demand for parts is a lot higher than that for stolen phones.
You might not buy a phone from an internet cafe that sells 2nd hand phones, but you might buy a replacement screen from an internet cafe that does repairs.
As others note, it's already possible to remotely wipe/lock a stolen phone, and if you report it stolen to the carrier, they'll list the IMEI in a database shared to other carriers; many will refuse to activate/connect when their SIM is inserted into a phone with a known-stolen IMEI.
Short of visibly, physically destroying the phone, though, there's nothing you can do to prevent this. A criminal doesn't care that the phone is a brick; they'll sell it to someone and be miles away with the money before the buyer realizes the phone is useless.
Once it is reported stolen, it could capture the face of everyone who tries to access it. If you assign your police case number in iCloud, it could automatically send your police department these faces (with times and locations) to expedite recovery. With all the face databases popping up, they'll easily have a short list of people to follow up with.
Then again, this might create lots of other problems.
Activation Lock does exactly what you want and already exists. If someone steals an iPhone associated with an iCloud account, it's completely useless to the thief.
> Apple need to create a system where stolen phones can be reported to them, Apple can then contact the owner/verify they are stolen. And then add them to a stolen list and disable calling/apps on those phones. And display an overlay on the screen THIS PHONE IS STOLEN.
Why would I want to give a third party the ability to brick my phone? If Apple has that power, then a disgruntled Apple employee can use that power and a government can compel Apple to use that power.
My computing devices are my computing devices. Only I should have the ability to do anything to them.
The complete business model is taking the p*ss imho. I am seeing a number of people reverting to simple €20 Nokias for basic telephone + SMS usage, on top of a gadget / secondary device for consumption or mobile business.
Apple sold over 200 million iPhones last year, and they make up a relatively small proportion of the overall smartphone market. I don't think there are very many people like you describe.
> It's alarming not just because the number is so low, but because Dropbox holds such valuable information for so many people.
I'd suggest that Dropbox users somewhat self select for those not as concerned about security as others. And more concerned about availability.
Dropbox does not encrypt your data server side (or at the very least, can easily decrypt it). And they have proponents of warrantless surveillance on their board:
> Dropbox does not encrypt your data server side (or at the very least, can easily decrypt it).
I think claims like this need to be backed up.
Now, obviously a biased source, but Dropbox itself says this:
"Each file is split into discrete blocks, which are encrypted using a strong cipher. Only blocks that have been modified are synced. Each individual encrypted file block is retrieved based on its hash value, and an additional layer of encryption is provided for all file blocks at rest using a strong cipher. Both dedicated internal security teams and third-party security specialists protect these services through the identification and mitigation of risks and vulnerabilities. These groups conduct regular application, network, and other security testing and auditing to ensure the security of our back-end network. In addition, our responsible disclosure policy promotes the discovery and reporting of security vulnerabilities." [0]
So we have files that are broken apart, each part encrypted, then the whole combination encrypted again, then lots of security auditing in-house and outside, and with incentives for people that discover flaws to report them. That seems pretty industry-standard to me, but I'd like to know more.
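The description reads like content-addressed, block-level encrypted storage: files split into blocks, each block encrypted and indexed by a hash, so only modified blocks are re-synced. A toy Python sketch of that structure (the XOR keystream stands in for a real cipher, the 4-byte block size is purely for demonstration, and none of this claims to match Dropbox's actual, non-public implementation):

```python
# Illustrative block-level storage: split, encrypt per block, dedup by hash.
import hashlib

BLOCK_SIZE = 4  # tiny for demo; real systems reportedly use multi-MB blocks

def keystream(key, nonce, n):
    """Derive n pseudorandom bytes from key+nonce (stand-in for a cipher)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def store_file(data, key, block_store):
    """Return a manifest of block hashes; upload only blocks not yet stored."""
    manifest = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        if h not in block_store:  # dedup: unchanged blocks aren't re-synced
            ks = keystream(key, h.encode(), len(block))
            block_store[h] = bytes(a ^ b for a, b in zip(block, ks))
        manifest.append(h)
    return manifest

def fetch_file(manifest, key, block_store):
    out = b""
    for h in manifest:
        ct = block_store[h]
        out += bytes(a ^ b for a, b in zip(ct, keystream(key, h.encode(), len(ct))))
    return out

store = {}
m1 = store_file(b"AAAABBBBCCCC", b"k", store)
assert fetch_file(m1, b"k", store) == b"AAAABBBBCCCC"
m2 = store_file(b"AAAAXXXXCCCC", b"k", store)  # edit the middle block only
assert len(store) == 4  # 3 original blocks + the 1 changed block
```

The key point for the thread: nothing in this structure stops the operator from decrypting, because the operator holds the keys — which is exactly the criticism raised below.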
I really have some difficulty imagining a company like Dropbox, which knows how important the documents it stores are, being careless with security. Not saying they may not be, but it's going to take more than an HN comment that includes some politicized perspective about the Bush administration to convince me.
Furthermore, this article [1] claims that Dropbox encrypts files on the server even stronger than Google does. It also points out that user behavior is usually the main security hole, which will always be true with any service.
Yes, Dropbox uses encryption in transit and at rest, and security would certainly be a top priority for any custodian of that much data. Dropbox is an industry leader, and in many ways sets the course for the entire industry to follow in this regard.
The issue with Dropbox is that they also have access to your encryption keys, which means they can easily decrypt and access your files, at their discretion.
According to Drew Houston (Dropbox CEO), they need access to your files to offer features like search, to better understand how you're using the service, to integrate with third parties, and for law enforcement. Some of these "trade-offs" are mentioned by Mr. Houston himself in this interview when responding to criticism from Edward Snowden a few years ago:
https://techcrunch.com/2014/11/04/dropboxs-drew-houston-resp...
More to the point, giving Dropbox (and their affiliates and trusted third-parties) permission to access to your files is a key provision of the Dropbox terms of service:
Our Services also provide you with features like photo thumbnails, document previews, commenting, easy sorting, editing, sharing and searching. These and other features may require our systems to access, store and scan Your Stuff. You give us permission to do those things, and this permission extends to our affiliates and trusted third parties we work with.
> I'd suggest that Dropbox users somewhat self select for those not as concerned about security as others. And more concerned about availability.
I would rather say that Dropbox is being used by many people without tech knowledge. And while they might be concerned about security, they often just don't know how important 2-factor authentication is. At least that's what I can see for some friends & family.
I have tech knowledge, but I had absolutely no knowledge that Dropbox offered 2-factor.
I don't keep confidential stuff in DB because I know that the company effectively has access to everything. Nonetheless, 2-factor sounds interesting. So I look at this:
Explain? You read a page about two-step and say that explains why no one has enabled it? You claim to have tech knowledge, but are not able to turn on this simple security setting (or even know it exists, despite that it's listed very clearly in your Dropbox settings page)?
I use two-factor/two-step verification on every single service I have, including all social media accounts, email accounts, etc. Most major services/sites these days provide it. The way Dropbox does it is no different than any others; it takes 2 minutes to set it up. What did you find difficult about it?
> You claim to have tech knowledge, but are not able to turn on this simple security setting
GP didn't say _they_ can't enable it after reading the help page. I think they are implying that the very detailed help page looks long and complicated to a non-techie (who might not even understand the benefit of going through such a hurdle in the first place).
> Before enabling two-step verification, you'll receive ten 8-digit backup codes. It is very important that you write these codes down and store them somewhere safe.
Do any of your other systems handle recovery like that?
eeh, since when does u2a protect against back-end breaches?
that's just a security layer against phishing or password leaks...
don't get me wrong, I'd advise everyone to use it for anything remotely critical, because it's pretty easy to set up and live with, but it really doesn't help against state actors or hackers that compromised the data servers.
A Dropbox hack is nothing most users have to protect themselves against. Same as with a google breach where GMail data gets leaked.
As a personal user of no special interest, no one would use a potential Dropbox vulnerability to just get your data. If you secure your end you'll be fine. That's different for big corporations or people with a public profile (e.g. politicians). In this case you have to ensure that malicious actors with a lot of money and knowledge cannot gain access. But then again, self hosting is likely less secure than Dropbox or Google.
> since when does u2a protect against back-end breaches?
Because if someone steals your DB password, they still won't be able to login to your account.
Maybe they won't have to, if they also stole your data and found a way to decrypt it, but since those are different things, it is plausible that there could be a leak of login information without a leak of data, in which case your two-factor authentication would keep the attackers out of your data.
> since when does u2a protect against back-end breaches?
It doesn't have to. For 99.999% people, the two most realistic threats are:
* There is a keylogger on some computer where you access your Dropbox, for example at a print shop,
* You use the same password on many sites, one gets compromised, and automated bots try to access your Dropbox account.
Confidentiality, Integrity and Availability (CIA). Those are all part of information security.
This is actually quite interesting, since it is a bit like CAP theorem. When you increase confidentiality and integrity, you might be affecting availability in a negative way. Take Dropbox as an example. Since they don't have efficient end-to-end encryption offering, they can offer you password resets (=availability of data is good). Add in secure end-to-end encryption and password resets won't be enough, you need to have in addition good backups for the encryption keys to ensure access to the data.
Which is why it's nice that none of the biometric auths can be used without also having a PIN for backup auth. And also why it's nice that you can disable biometrics entirely.
The comment was aimed at the Snowden example. If Snowden thought his PIN was compromised he could always change his PIN, but once his face is compromised, what does he do?
That breaks both ways: only a fairly advanced attacker could "change their face" to access your phone. So Face ID still covers the 99% of cases Troy talks about. For the rest, I'm not sure a PIN works, either, so they'd have to use a password.
The only secure thing is a thing that only you know and only you can verify even if you are freely observed.
That is, shared secrets between you and your trusted device (meaning passwords) are the singular thing that provide authentication securely. Your password cannot be extracted from your head (yet).
That being said, if your risks are mundane then the benefits of biometric authentication far outweigh constant password input, not to mention that constantly entering your password exposes you to other side-channel attacks.
Biometrics for simple access; passwords for changes, modifications, and access to sensitive information.
I know several techniques that would be highly efficient in extracting my password from my head. Some don’t even require physical access. (This is a cryptic way of saying that credible threats of violence or actual violence would compel me pretty quickly to tell my PIN.)
I wonder if you could hack a person's password from their mind by forcing them to go through the alphabet and monitoring their heart rate / brain activity for each letter of their password. There must be a way to detect based on their reaction when you're on the right character, like a lie detector.
I mean, I guess at that point you could just torture it out of them, but I wonder if this could work as a method that wouldn't count as torture.
The ease and quality of video surveillance from cell phones makes me think passwords are not that secure if other people or devices can observe you entering them.
According to the article, the phone won't unlock if you aren't attentive, i.e., looking at the phone with your eyes open with tiny imperceptible eye movements.
Not looking forward to the video services that demand that you pay attention to ads before they let you view their content, now that they have the ability to check.
Do they actually have that ability, though? There's a big difference between "the authentication mechanism has access to this API when you're authenticating" and "everything always has access to this API at all times". I haven't seen any indication that this will be generally available on-demand to applications, and Apple's track record on this stuff isn't terrible (largely because Apple isn't existentially dependent on advertising like some other vendors of mobile tech).
The camera and the sensors can be available but I'm certain the auth mechanism itself won't be accessible. AFAIK they have a dedicated chip for this like Touch ID and what software can is limited.
For me it's a simple question of cost vs. reward: do I care enough about the security of whatever data is stored with a company, that I'm willing to give the company personal information, when their terms of service almost assuredly give them complete license with it?
This, of course, starts with the question: do I even want to put this in the cloud to begin with?
From the OP: "the data is stored in the iPhone's secure enclave and never leaves the device". It appears that this has nothing at all to do with the cloud. And if you don't trust Apple's word here, then you also have no reason to trust that they (or any other handset maker) haven't programmed the camera to surreptitiously take and transmit photos at all times.
This article doesn't really say much of anything. Troy pretty much just summarized a few slides from the Apple event and then ended the article saying he was going to buy an iPhone X and is interested to see how Face ID turns out. I really gained nothing from reading this.
The point of the article is that many people are complaining about FaceID's security in abstract. The alternatives, like the relatively common 'no password' or '123456 pin' are much LESS secure than FaceID.
The other arguments people are making tend to be very fanciful scenarios that don't apply to normal people (state actors, high quality makeup shops with a perfect face mold of your face, etc).
It may not be perfect but like TouchID it's probably way better than the alternative.
There is an opposite use case which will make me consider getting an iPhone X for a long time.
Every so often, I leave my phone at home and I need my wife to get some info from it. Or my phone runs out of battery and my wife's phone is there, and I use it to make a phone call.
Oh. Thank you for pointing this out. It was not clear to me that this is the case. From a quick look at their initial marketing material, it seemed like the way to unlock it was to wave it in front of your face.
1 in 1 million FAR (false acceptance rate) vs 1 in 50,000 is pretty misleading (as is Apple tradition).
Do you think someone trying to hack into your phone would shoot 1 million random pictures/3D profiles made from Facebook pictures at your phone, or do you think it's far more likely they will already start with your profile made from online pictures?
That will likely make the success rate even higher than with fingerprints, as it's significantly easier to get someone's photos than it is to get their fingerprints.
> Laughs were had, jokes were made but the underlying message was that Face ID isn't foolproof. Just like Touch ID. And PINs.
No, not "just like". There is a huge difference between most fingerprint authentication mechanisms and most face unlock mechanisms (at least so far). Most of them could be tricked with a 2D picture - including Samsung's latest. It's very annoying to see such a statement from someone like Troy Hunt. Plus, I have a hunch he'll be eating many of the words he wrote in a few weeks when Face ID will prove much easier to hack than Apple made everyone believe it will be.
Online pictures do not contain depth or other needed information.
Regarding the false acceptance rate, they would not specify and compare to the already-known Touch ID FAR just for no reason. Logic behind the number is not that someone would actually try one million times (IIRC - just 5 failures will wipe keying material and force passphrase entry), but rather, they're saying derived data for Touch ID had 50,000 possibilities, and there are 1,000,000 possibilities for the Face ID derived data.
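Under that reading, and with the attempt lockout mentioned above, a quick back-of-envelope shows what the quoted FARs mean in practice (assuming independent random attempts, which is generous to the attacker — real attempts by the same face or finger are correlated):

```python
# Chance of a random false accept within the 5 attempts allowed
# before keying material is wiped and the passphrase is forced.
attempts = 5
far = {"Touch ID": 1 / 50_000, "Face ID": 1 / 1_000_000}

within_lockout = {name: 1 - (1 - p) ** attempts for name, p in far.items()}
for name, p in within_lockout.items():
    print(f"{name}: ~1 in {1 / p:,.0f} chance of a random match in {attempts} tries")
```

Roughly 1 in 10,000 for Touch ID versus 1 in 200,000 for Face ID — still a 20x gap, though as the comment above notes, a targeted attacker starting from your photos is a different threat model entirely.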
I think it would be really cool if it turned out that it could tell the difference between some pairs of identical twins due to seemingly imperceptible differences.
I'm sure it's not 100%, but I'd bet it's close. Certainly by young adulthood the identical twins I've been around have been relatively easy to distinguish.
That's sort of my theory. But I wonder how much of people's ability to distinguish identical twins is based on physical differences vs more subtle things like how they carry themselves/interact with the world.
Some twins I've known deliberately create physical differences like different hairstyle so other people can recognize them easily without interactions.