I'll take the limited risk. I've had to contact Fastmail support and it was a breath of fresh air. It's a bit absurd that something so fundamental as email has essentially no support from a company as large as Google; it's not a bug-free product.
I suppose eliminating humans is a security win, but HN is full of stories of AI systems failing and banning accounts for essentially nothing. Not having a human to appeal to is far riskier to me. It's not like these AI systems can't be gamed to knock people offline. I'll take the risk of having humans involved -- it's far less stressful.
> It's a bit absurd that something so fundamental as email has essentially no support from a company as large as Google; it's not a bug-free product.
I'd be willing to bet that Gmail has a couple of orders of magnitude more users than Fastmail, while also providing a substantially bigger inbox (than the cheapest Fastmail option), and providing the whole thing for free. I don't think it's surprising that they make trade-offs to support that model. Just think of how many support staff you'd need to support 1.5 billion users!
> HN is full of stories of AI systems failing and banning accounts for essentially nothing. Not having a human to appeal to is far riskier to me. It's not like these AI systems can't be gamed to knock people offline. I'll take the risk of having humans involved -- it's far less stressful.
I don't think the trade-off is that simple. There are plenty of stories of support staff getting scammed into incorrectly granting access to accounts. Is one better than the other? It's not a clear choice imo.
>> I don't think it's surprising that they make trade-offs to support that model. Just think of how many support staff you'd need to support 1.5 billion users!
Google has a shitload of money; they can afford to hire enough staff. Cost is a lame excuse here.
They provide support for users that pay them, and for advertisers. Their business model is to sell things, and it is working pretty well. They can certainly 'afford' it, but they don't want to, and your complaint as a 'free' tier user means little to them.
What is needed is legislation, or at least a widely practiced standard for real-person online ID, so that losing access to your email account doesn't nuke your ability to operate online anywhere that requires you to verify your identity, even pseudonymously.
I've managed a Google Workspace account (~30 paid users) for over a decade and have never had support respond in less than a week. And each time I got a canned response. I just don't even bother anymore, which is likely what they want. I don't think this is a free vs paid thing. It's just the way Google operates.
That's weird, I have a Google Workspace account with less than 10 paid users and had several in-depth conversations with support personnel on SMTP and DNS setup issues. It was outsourced to an overseas call center, but they did respond to my queries.
That said, I have issues with spam being delivered to my organization's group aliases, and I can't report the spam because doing so flags my group alias rather than the original sender (!). I can't turn on spam filtering for the group alias either, because it flagged legitimate emails from our customers. So I'm stuck between a rock and a hard place, with no one at Google to talk to about it.
It depends how much money you spend with them. If you shell out for expensive support in GCP you get guaranteed response times, dedicated account reps and so on.
I'm paying $10 a year for my email and the one time I had an issue I got a response within 8 hours and a follow-up after everything was resolved. It shouldn't require Fortune 500 levels of spending to get basic service.
Not really. It sounds like you don't have a sense of how much it costs to hire people, how many people are needed to provide on-call support, or the scaling cost of managing and training them.
My main email account was through Hotmail in 2000, and it got shut down that year due to a social engineering attack. The guy who did it even told me he was going to do it first. I didn’t get to have it covered in any mainstream news headlines either :P
> AI systems failing and banning accounts for essentially nothing.
The strongest statement you can make about the standard HN Google account outrage post is that the complainant is unaware of or unwilling to admit to the behavior that got their account suspended. Drawing the conclusion that all such complaints are false positives is not warranted by the evidence.
Unless you're implying the false positive rate is 0%, it's still a concern for me. I've seen cases where the user obviously did something in error but had no chance to appeal. E.g., they uploaded a photo that got flagged and then lost access to their email, domains, YouTube content, any form of social login, etc. My email account is too important to me to risk with an automated system that has no option to appeal to a human. That risk is much higher to me than someone social engineering their way into my Fastmail account.
To me, this is analogous to backing up your BitLocker key with your online Microsoft account. Is it the optimal approach to security? No, but the far more likely risk factor is losing your key locally and then losing access to all of your data. I'll take the peace of mind that comes with knowing I can speak to a human if things go sideways. As an added benefit, I've been able to speak to a human when routine service issues have come up and it's been a pleasant experience.
An extremely underrated (and insightful) point to consider.
More generally, how do you actually get a measure of risk between two providers, when the absolute frequencies of measurable events are very low?
It seems plausible to me that FastMail could have 10x or 100x the level of security incidents as GMail, and it would still net out to an undetectable difference in the number of public complaints.
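A quick back-of-the-envelope sketch makes this concrete. All the figures below are hypothetical assumptions (Fastmail doesn't publish user counts, and nobody publishes per-user incident rates); the point is only that absolute complaint counts scale with the user base:

```python
# All figures are hypothetical illustrations, not real data.
gmail_users = 1_500_000_000
fastmail_users = 500_000            # assumed; not a published figure

# Assumed security incidents per million users per year:
gmail_rate_per_million = 1
fastmail_rate_per_million = 100     # 100x worse per user, per the comment above

gmail_incidents = gmail_users * gmail_rate_per_million // 1_000_000        # 1500
fastmail_incidents = fastmail_users * fastmail_rate_per_million // 1_000_000  # 50

# Even at a 100x worse per-user rate, the smaller provider produces far
# fewer absolute incidents -- and hence far fewer visible public complaints.
print(gmail_incidents, fastmail_incidents)
```

So counting HN horror stories mostly measures provider size, not per-user risk.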
When I worked in the anti-abuse business, account security was tracked by lurking in organized crime fora and determining the market price for stolen accounts. I don't know what it looks like for FastMail, but I do recall that the range between good and bad platforms was huge. A stolen Google account was like $10, but stolen Yahoo! Mail accounts were more like a nickel per thousand.
(Architect of Fastmail's login/account recovery protocols here.)
Firstly, I will say this incident was unacceptable, and we were deeply sorry about it. However, it is also the only time it has happened in our over 20 year history (to the best of our knowledge of course). We already had several projects underway to improve the security of account recovery at the time, which unfortunately hadn't quite landed yet. Since then we have introduced an automated recovery tool with a very carefully designed flow (more info: https://www.fastmail.com/blog/security-account-recovery/) that securely handles most common cases (e.g., forgotten password, or user's account stolen due to password reuse/phishing). Human support is still available, but any account recovery request can only be handled by senior support agents who have undergone rigorous training, and in the case of any doubt are escalated all the way up to our senior security engineers.
Elsewhere it's been mentioned that different people may have different priorities in balancing ensuring they don't lock themselves out, versus ensuring an attacker can never access their account. We provide some flexibility here. If a user has 2FA enabled, we must verify two separate means of verification to grant access, whether via our automated tool or support-assisted recovery. Users can also submit a support ticket to request we add a note to their account to never do human-assisted recovery.
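The policy described above can be reduced to a very small rule. This is my own sketch, not Fastmail's actual implementation; the factor names and function are hypothetical:

```python
# Minimal sketch of a "two separately verified factors" recovery rule.
# Factor names and this function are hypothetical, not Fastmail's real API.
def may_recover(verified_factors: set[str], has_2fa: bool) -> bool:
    """Grant account recovery only when enough independent proofs were verified."""
    required = 2 if has_2fa else 1
    return len(verified_factors) >= required

# A user with 2FA enabled needs two independent proofs of identity,
# e.g. a recovery phone number plus a backup email address:
assert may_recover({"recovery_phone", "backup_email"}, has_2fa=True)
assert not may_recover({"recovery_phone"}, has_2fa=True)
```

The same rule would apply whether the request comes through an automated flow or a support agent, which is what keeps human-assisted recovery from becoming the weakest link.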
I realise it's very hard to assess the security competence of an organisation from the outside, and for what it's worth, we think the Google security team also do an excellent job. But overall I think we do a very good job of keeping users secure while not locking them out of their own account.
> Elsewhere it's been mentioned that different people may have different priorities in balancing ensuring they don't lock themselves out, versus ensuring an attacker can never access their account
Thank you, this is the most important observation.
Service providers should be providing flexible mechanisms to meet different needs; they should absolutely not be imposing a one-size-fits-all policy. That's the fundamental wrongness of Google/Facebook and their ilk.
Only I know what security levels I need for any given account I own. I must be able to configure the policy.
Sometimes, I value my access above all else. With some other account I may value preventing access to others even at the risk of losing access myself. Other variants are possible. Only I know what the correct policy is in any given case.
On the contrary, I would argue this is the exact mindset that makes Google so bad at securing their systems. Every single large Google platform is also the leading distributor of its kind of malware, ultimately because computers are stupid: once you understand what they are programmed to handle, you can work around them. Humans can become suspicious and can be held accountable; computers do what they're told, and nobody is taken to task when something goes wrong.
I would contend that if you cannot reach a person, you cannot trust a system. And that has generally held in the entire history I've been on the Internet. I chose my web hosting by who had phone support, I've had the CEO of Fastmail respond to my support tickets before. I have yet to be betrayed or compromised by a single platform where humans were involved, but automated systems have failed me regularly.
This is true of offline systems as well. If you want a security system to protect your business, you may have keypads and sensors and things, but you also have a monitoring center staffed by people who can see events in real time.
I think our industry has had a fantasy that complex enough math problems can provide real security, but I would hope the cryptocurrency market has put that silliness to bed by now.
I'm not sure how you can make that judgement without extra context (that is almost certainly tightly held within google). For example, what actually is the error rate? How does that compare to improper access that is successfully prevented?
Obviously any real person losing access to their account is a rubbish experience for that person, but an error rate of 0% is not possible with any system (including those with plenty of humans involved) when there are billions of users involved. I think a much more interesting question is "what's the acceptable error rate?"
I highly doubt that Google even tracks the error rate. The fact that you somehow need a viral post on HN to get your account back is evidence of that: they don't even know they made a mistake. Also, based on the number of posts we see here, it's a non-negligible error rate. How many users does HN have? A few tens of thousands? So 32 posts makes it maybe 1 in 1,000, and even if it's 1 in 10,000 or 1 in 100,000, that's a pretty high probability of losing your online identity.
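The arithmetic above is easy to check. Both inputs are guesses (HN doesn't publish active-user counts, and 32 is just an eyeballed post count), but the scale of the conclusion doesn't depend on them much:

```python
# Rough illustration of the estimate above; both inputs are guesses.
complaint_posts = 32     # lockout stories eyeballed on HN
hn_users = 32_000        # assumed: "a few tens of thousands" of active users

observed = complaint_posts / hn_users          # 0.001, i.e. 1 in 1,000
print(f"observed complaint rate: 1 in {hn_users // complaint_posts:,}")

# Even far lower error rates affect huge numbers of people at Gmail scale:
gmail_users = 1_500_000_000
for denom in (1_000, 10_000, 100_000):
    affected = gmail_users // denom
    print(f"error rate 1 in {denom:,}: ~{affected:,} users locked out")
```

Even the most charitable rate in that range leaves tens of thousands of people locked out of their online identity.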
So if there is no way of contacting a human once you have been locked out of your account, how do they determine a false lockout? I'm serious: every thread here on HN about being locked out says the affected person tried all other avenues and never got anywhere near a real human. So that would make all their research flawed, wouldn't it? Because it simply checks that the algorithm is consistent with itself. Let's not assume malice. However, that doesn't make it much better, because it means the account-abuse quality research team is borderline incompetent.
> So that would make all their research flawed, wouldn't it? Because it simply checks that the algorithm is consistent with itself. Let's not assume malice. However, that doesn't make it much better, because it means the account-abuse quality research team is borderline incompetent.
I don't think it follows that you need to speak to an affected user to confirm they were improperly locked out of their account. You could have a human review the account history and the steps that led up to the suspension to decide whether it was a good call. No doubt you'd get more info by speaking to the affected user, but that in itself is not perfect (a scammer's whole game is convincing Google they're someone else, after all).
I guess what I'm getting at is that there are a lot of grey areas when you're trying to do account recovery at scale. No doubt there are cut-and-dried cases where people are locked out of accounts they've used for a long time (and that's shit for the people affected), but there are also plenty of scammers who'd put a lot of effort into convincing a support person that they should have access to an account. I just don't think having support staff is the panacea it is often portrayed as.
One can easily make that judgment. The absence of extra context is itself a good reason to make it. Google has a reputation for closing accounts and refusing to communicate, and Google does not contest this reputation. They give no numbers and share no rate. "What's the acceptable error rate?" isn't an interesting question if you have no numbers. We do, however, have other companies and service providers to compare against.
> How does that compare to improper access that is successfully prevented?
Last year I had an email from immigration services and I had to reply within 10 days. If I lost access to my email, I would be deported right now. They don't call, they just email. Why? I don't know, but that's what it is.
On the contrary, if someone gets access to my email, what can they do? Send random porn to my contacts? No one will care.
As long as I can call the provider and fix the problem, it is irrelevant.
* For your own security (from theft) we'll hardware lock your phone. Best to throw it in the dumpster if you forget the password.
* Can't allow people to repair their own hardware. What if kids try to do it and end up burning down the whole apartment block? Best to forbid it for security.
* You can't film public institutions: it's a security issue.
* And now: can't allow humans to operate business decisions. What if they're socially engineered? Best leave everything to automation and fuck you if you slip through the cracks.
It's funny because in the airplane industry, even though planes basically fly themselves, companies still want pilots, because that's what people are best at: solving unique problems as opposed to repetitive issues.
A critical question is what threat models you're worried about:
Are you worried about an individual interested specifically in you, Jeff B, to get something worth many thousands of dollars that they know you have? Don't put a human in the loop, they're going to track you across Facebook/LinkedIn/local government resources, they're going to know more about your car registrations and when you bought your home than you know about yourself, and they're going to be able to very convincingly social engineer a human in the loop if one exists.
Or are you worried about a group of hackers continuously crawling the web for a database dump from some service you and ten thousand other people signed up for, or some flaw in the authentication sequence to automatically sign everyone in the database and all their contacts a spam network for pennies per person? Their scheme falls apart if they have to call a human, because it's just not worth the time to look up your public records and talk to a human about you.
Second, what happens after you get hacked? Are you more concerned whether you no longer have access to something very important to you? For example, if you've distributed business cards or have contacts stretching back decades with jeffb@gmail.com, losing that account might mean an old friend or business contact fails to find you again. Having a human in the loop for the last-resort password reset can prevent completely losing access.
Or are you more worried about someone getting access to the data behind your login? You've presumably got backups, so you'd rather no one ever had access again than some malicious third party got the password to your crypto wallet, SSH keys to your website, or other private data.
Those have very different ideal responses. Unfortunately, most people tie both categories together in their single Google account, or in an Amazon account tied to both shopping and AWS resources.
It is a fantasy that you can have humans adhere to procedures. That's the whole underlying problem of social engineering. Just take the human out of the loop.
"I don't know if you wanna entrust the safety of our email to some silicon diode."
All joking aside:
I mean... we already know that taking humans out of the loop leads to undesirable consequences (like losing your Google account with no recourse). So the only question is whether the consequences of one scenario or the other are particularly worse.
See, that's the fundamental hubris/weakness of the current Silicon Valley ethos (well, most tech ethos today) taken to its extreme: taking the human out of the loop. Then who/what does it actually serve?
(Or maybe they know it perfectly well, but don't say it out loud.)