So 23andMe failed to identify brute force and credential stuffing access of 14,000 accounts. They also have a feature that grants those 14k compromised accounts effective access to 6.9 million accounts.
23andMe then claims that poor password practices are responsible for this data leak.
> “Therefore, the incident was not a result of 23andMe’s alleged failure to maintain reasonable security measures”
I've not run security at an org of their size, nor have I touched their service, but I have to imagine there were some patterns to this breach that would have been reasonable to account for ahead of time. Did those 14k accounts also have their email provider accounts compromised? Could a login IP-range check have prevented all of this? 2FA seems like an obvious answer here, but clearly that was more than could be expected.
There was no brute forcing done. The credentials were from other sites that were leaked via Tor and the users on 23andMe used the same email/password combo. That’s why you don’t reuse passwords, when possible.
Nothing on 23andme’s end failed unless you consider someone using a correct user/pass combo while not being the owner as a fail on the part of 23andMe rather than the end user.
In 2023 an org of their size and with such sensitive data needs to give a bit more of a fuck about who is accessing users' accounts. Mass leakage of potentially reused credentials is an ancient concept at this point and should have been on their radar as an attack to protect their users' data against. Basically, they shouldn't have just been relying on passwords to authenticate users. Many orgs of their size and with much less sensitive information than the literal genetic data of their users do a lot better.
If the user logs in from a device we haven't seen before, or they haven't logged in for more than a year (or six months, or two weeks etc), send them an email challenge.
Maybe the email address on file is also cracked but it'll make it harder, and it's more work for the attackers.
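A minimal sketch of that rule, assuming a hypothetical device-cookie store and last-login timestamp (the names and thresholds are made up for illustration):

    from datetime import datetime, timedelta

    STALE_AFTER = timedelta(days=365)  # or 180 days, 14 days, whatever you pick

    def needs_email_challenge(user_id, device_id, known_devices, last_login, now=None):
        """Return True if this login should be gated behind an emailed code.

        known_devices: dict of user_id -> set of device cookie ids seen before
        last_login:    datetime of the previous successful login, or None
        """
        now = now or datetime.utcnow()
        if device_id not in known_devices.get(user_id, set()):
            return True   # never seen this device for this user
        if last_login is None or now - last_login > STALE_AFTER:
            return True   # account has been dormant too long
        return False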
Keep in mind that will force everyone who doesn't keep cookies to have to do that at every login.
Github is like that right now, and it's quite a pita; sure, it's not a great idea to continually delete all cookies without exceptions, but in some cases it's currently hard to avoid it (low-end smartphones where Firefox is too heavy)
That's true. Also, if I cut my keyboard in half, it's a lot harder to use Google Docs.
I sympathize, but at a certain point if you've gone out of your way to disable the features that the developers have added to make your life easier, you just don't get to complain about it.
It's a perfectly reasonable compromise though if you can't force MFA for some reason. There are many sites which do this today.
You don't even need to rely on the cookie if you're worried about the ux for cookie clearers. You could also whitelist an IP address (or even a subnet) when they verify the email, and it would have been "good enough" to prevent this particular situation.
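A rough sketch of that allowlist idea using the standard-library ipaddress module; the /24 granularity and the in-memory store are assumptions for illustration, not anything 23andMe actually does:

    import ipaddress

    def remember_verified_ip(user_id, ip_str, allowlist):
        # Collapse the verified IP to its /24 so nearby addresses still pass.
        net = ipaddress.ip_network(f"{ip_str}/24", strict=False)
        allowlist.setdefault(user_id, set()).add(net)

    def login_needs_verification(user_id, ip_str, allowlist):
        ip = ipaddress.ip_address(ip_str)
        return not any(ip in net for net in allowlist.get(user_id, set()))

    allowlist = {}
    remember_verified_ip("alice", "203.0.113.7", allowlist)  # after the email challenge passes
    print(login_needs_verification("alice", "203.0.113.50", allowlist))  # False: same /24
    print(login_needs_verification("alice", "198.51.100.9", allowlist))  # True: challenge again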
They had no issue making MFA mandatory after the fact, so they should’ve had no issues making it mandatory before the fact.
> After disclosing the breach, 23andMe reset all customer passwords, and then required all customers to use multi-factor authentication, which was only optional before the breach.
As others have pointed out, there are also other options, such as an email challenge when noticing high traffic, or, damn, even when noticing a login from an unfamiliar device or IP. Many services do this all the time.
We’re talking about raw DNA data here that is accessible. You’d expect levels of security as implemented by banks if not better, not “Little Timmy’s first blog” levels of carelessness.
For starters, I didn't downvote you and couldn't even if I wanted to since HN doesn't allow you to downvote replies to your own comments.
For another, I got the threads confused and thought you were talking about the accounts that shared access with compromised accounts. Sorry. Relax yourself before you jump immediately into your persecution complex.
> For starters, I didn't downvote you and couldn't even if I wanted to since HN doesn't allow you to downvote replies to your own comments.
Fair point.
> For another, I got the threads confused and thought you were talking about the accounts that shared access with compromised accounts. Sorry. Relax yourself before you jump immediately into your persecution complex.
Apology accepted.
Perhaps it might be wise to dial the snark down a bit, regardless of whether you're confusing threads or not.
It adds little to the discussion at hand and only elicits replies with a similar tone.
You probably won't answer this now since it's so much later, but can you point out where I was snarky? I'm being absolutely serious here, because I don't understand where I was snarky enough for you to jump to "So maybe save your downvote next time until you know what you're talking about." I'm just trying to avoid the immediate escalation in the future, and maybe I'm being aggressive without knowing it?
You can't see just anyone's DNA. You have to opt-in to the program and share it with specific users, in nearly every case someone who is a distant relative that is tantamount to a stranger.
Many orgs will use location and connection types to filter this.
For example, if I proxy my connections through a VPS or VPN I will OFTEN either be outright denied access, or at best get sent to a validation step (most often they shoot the email a verification code that I have to plug in).
I will often route traffic through a Linode for reasons, and sometimes use a VPN here and there (e.g. Mullvad). In almost all cases this will trigger anti-spam measures on sites, some so intrusive it's borderline unusable (e.g. YouTube and Google with reCAPTCHA).
You don't know that they weren't. HIBP is not omniscient. It doesn't automatically get a list of leaked account info unless that info is published publicly. Based on the evidence I've seen so far, that's not the case. It seems like the breached data was sold privately on the dark web and was tested for months via a botnet. It also seems like the leaked data included either IPs or last known login location info, which means someone with a sizable enough botnet could have used that info to log in from nearby locations, thereby bypassing any prompts triggered by "new locations".
NIST SP 800-63B "Digital Identity Guidelines" specifically requires preventing users from setting passwords which are known to be commonly-used, expected, or compromised.
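For what it's worth, the usual way to implement that check is HIBP's k-anonymity Pwned Passwords range API, where only the first five hex characters of the SHA-1 hash ever leave your server. A minimal sketch (error handling and caching omitted):

    import hashlib
    import urllib.request

    def password_is_breached(password: str) -> bool:
        """Check a candidate password against the HIBP Pwned Passwords range API."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
            body = resp.read().decode()
        # Each response line is "<hash suffix>:<breach count>".
        return any(line.split(":")[0] == suffix for line in body.splitlines())

    # At signup or password change:
    # if password_is_breached(candidate): reject it and ask for a different password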
How would you know they're compromised before you know they're compromised? According to the info I've read about this, the site and database that was breached was not published publicly but was sold privately on the dark web.
They have an email address on file, right? Many services automatically detect suspicious logins and ask for additional verification even if the user hasn't specifically turned on MFA.
That’s the problem. There wasn’t anything suspicious. From what I’ve read, the “hackers” used a botnet, aided with location/IP data from the leak. There would be nothing suspicious about a login with the correct email and password coming from the right location.
Would be interesting to know how they were testing authentication. Were they using a botnet of any sort? Otherwise for every "valid" user/pass combo from an external leak they tested there'd be several failures. A single (or multiple) hosts smashing auth attempts should raise flags. They didn't "Brute force" one user account at a time, but they did brute force the authentication system in general.
The current info that's been released seems to indicate that they used a botnet over the course of several months and had access to the "last known login location". So there wasn't any "smashing" happening and no "you're signing in from a different location" blocks either.
Require MFA to be enabled when it's an issue of indirect access to personal data of potentially millions of other users on the site. Any retort like "okay well that might just hurt the platform's ability to attract users with that sort of security prescription," gets cement shoes in the bay. There's absolutely no reason to allow known dated forms of authentication to access user data of other 23andMe subscribers. Of course people are lazy and won't enable it if nobody is telling them they have to; most people are completely ignorant of how rampant these kinds of stories are because they don't subscribe to tech news. Somebody needs to be the adult and force people into the correct lane.
It's fairly common to do traffic analysis and look for behavior that is not typical: things like a sudden jump in the data being downloaded, access from other countries, changes in IP addresses, logins from new sites.
There are many security tools that use AI to identify patterns of access and alert on changes.
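You don't even need AI for the basic version; the heuristics above fit in a few lines. A toy sketch, with the event fields and thresholds invented for illustration:

    def login_anomalies(event, history):
        """Flag simple deviations from a user's historical behavior.

        event:   dict with keys like "country", "ip", "bytes_downloaded"
        history: dict of what's typical for this user
        """
        flags = []
        if event["country"] not in history["seen_countries"]:
            flags.append("new_country")
        if event["ip"] not in history["seen_ips"]:
            flags.append("new_ip")
        if event["bytes_downloaded"] > 10 * history["avg_bytes_downloaded"]:
            flags.append("download_spike")
        return flags  # any flag -> step-up auth, alert, or rate limit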
That assumes that haveibeenpwnd knew about this leak which would only be possible if the leaked data was posted publicly. It doesn't seem to have been as the hacker was looking to sell the information.
How would they know which passwords were breached? The data leak was from a different site and not from 23andme. Additionally, current evidence suggests that the breach was not known on haveibeenpwned prior to it being used and the reused credentials were tested over a period of months using a botnet.
> Additionally, current evidence suggests that the breach was not known on haveibeenpwned prior to it being used
Totally fair, I haven't been following this really closely.
That being said, if someone re-uses passwords once they probably do it a bunch of times, so it's odd to me that they didn't have a process to detect reused passwords and force a change.
Orgs with this kind of data will at least track geolocation and maybe device information, and require additional proof despite a correct password, as well as watch for attempts to access multiple accounts from the same address block. Many also incorporate the Have I Been Pwned leaked password database.
They have to act responsibly when handling and caring for this kind of data. It's irresponsible not to.
If you’re not providing secure MFA as an option and invalidating breached credential pairs via HIBP, you’re negligent as an idp. 23andme failed hard, and they should own it.
FFS, default to magic link login via email if you have to. At least then you're relying on Google, Apple, or someone else for auth (in most cases of unsophisticated users).
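A magic-link flow is also only a few lines; the mailer, URL, and in-memory token store here are hypothetical placeholders:

    import secrets
    import time

    PENDING = {}  # token -> (email, expiry); a real system would persist this

    def send_magic_link(email, send_mail, ttl_seconds=900):
        token = secrets.token_urlsafe(32)                # unguessable, single-use
        PENDING[token] = (email, time.time() + ttl_seconds)
        send_mail(email, f"https://example.com/login?token={token}")

    def redeem_magic_link(token):
        email, expiry = PENDING.pop(token, (None, 0))    # pop makes it single-use
        if email is None or time.time() > expiry:
            return None                                  # invalid or expired
        return email                                     # caller creates the session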
It is an option. Every user has the option to setup MFA when they set up their account. The fact that people reused their passwords and chose not to setup MFA is not 23andme’s fault.
As a 23andme user who has filed a complaint with the FTC, purposely opted out of arbitration and intending to join a class action, and is responsible for customer IAM at a fintech, I politely disagree. Poor IAM and AAA decisions are a choice, and there must be consequences for resulting harm.
I absolutely have an axe to grind against consumer harm incurred by lazy and/or negligent technology companies (all companies, really, just scoping for this convo). Guilty as charged. When good behavior is not forthcoming, spin up regulators and the legal framework.
EDIT: I do not believe this is an unreasonable position to take. I interviewed with the CTO of 23andme and almost took an infra job there (comp too low) ~12 years ago. I am a customer. I have mostly good things to say about them as an org. That is not a free pass when you do harm. Do better, it is not hard.
It’s not lazy or negligent on the part of the website when they offer additional security and users choose not to use it. 23andMe asks multiple times for users to set up 2FA and apps like 1Password and Bitwarden recognize that it’s available and prompt users to set it up.
It is when those users' passwords unlock not just their own data, but that of millions of other users as well.
Alice could have set up 2FA and adhered to all the best practices, but she still got her data stolen because Bob used "hunter2" and was hacked.
14,000 accounts compromised, 7 million users' data taken. There's no way 23andMe should be able to offload their responsibilities to Alice's cousin Bob.
That's not what happened. The 7 million users didn't have their data stolen. The compromised accounts had access to data that those users opted-in to share with those accounts.
Imagine that you have a bank account and you share access to it with a family member. If they use "Password1" for their password and someone gets into their account and then, by extension, has access to whatever level of access you've provided them to your account, is that the bank's fault? Is it yours? Is it your family member's?
Your analogy doesn't fit here. There is no scenario where accessing the accounts of 14,000 banking clients would then blow up to several million clients' accounts. Any bank that even offered this "feature" would, yes, be at fault.
There seems to be some transitiveness going on here. Let's go with the banking scenario: I give my son access to my checking account, and I also give my business partner access. My son is a dumbass, and uses the same password for everything. Now my business partner's info is taken. His parents get hacked as well.
From 14,000 to 7,000,000 is quite the amplification. That's on 23andMe and nobody else.
The analogy does fit. You're just mischaracterizing it. To continue on with your example, that's not what happened with 23andMe. If you gave your son access to your checking account via some account info sharing feature and someone gets access to his account, they have access to the same accounts he does and only those. Your business partner's info is safe unless he also shared his account with your son and his parents' info is safe unless they also shared with him.
The only info that was available from the 7 million accounts was specific info that they chose to share with the other account. If they chose to share everything, then everything would be available. 23andMe can't prevent their users from being idiots.
> The new NIST recommendations mean that every time a user gives you a password, it’s your responsibility as a developer to check their password against a list of breached passwords and prevent the user from using a previously breached password.
This assumes the breached password occurrence was known in advance and, from what I have read so far about this, was not the case with the 23andMe accounts.
You won’t have to. They could have forced MFA and been done with it. That doesn’t make it their fault that they didn’t. It just means they could have done better and assumed that at least some users (read: most) are ignorant about best practices with sensitive data. It’s not something they would be legally culpable for, though.
I agree that is a good idea, but that doesn't lay the blame of this so fully at their users' feet. This won't always catch password reuse attacks (now called "credential stuffing", I think), and is only a partial mitigation.
Unfortunately the majority of people aren't very tech literate. We have to remember HN is far from average. The company I work for forces MFA and I think if you have sensitive data like this, yes, you should force MFA. Truth be told, it's not going to enter the public lexicon until some big players start forcing adoption. Rule of thumb: if my grandma wouldn't know to do it, I shouldn't expect my users to do it. If you expect your users to use bad practices, then you're not doing your job well. Idk if we should say it's somebody's fault when that somebody is a non-expert and is making a reasonable choice.
But they did provide secure MFA as an option, and it seems the credential pairs hadn't shown up in HIBP because they had been privately purchased via the hack of a different site. The logins were even using locations that matched previous ones.
So how did 23andme fail so hard here? Literally nothing you've suggested would have prevented this.
> So how did 23andme fail so hard here? Literally nothing you've suggested would have prevented this.
They made MFA mandatory after getting popped, at the same time they changed their Terms of Service to attempt to evade liability. Why did they wait to get popped? Either it was negligence, or an active decision was made to avoid the support costs and engineering time of mandatory MFA. Also, the magic link I suggested would've solved this, unless attackers were going to get into everyone's inbox with leaked creds to get the link to log in and get that session token. Definitely more effort than credential spraying 23andme login endpoints.
A magic link is just a form of 2FA. And the reason not to make 2FA mandatory isn't about engineering costs -- they'd already built it. It's because a lot of users don't like it. I personally despise sites that require a magic link rather than a password, because it takes me 30s to log in instead of 1s.
There are lots of commenters here on HN in this story saying they don't think sites should make 2FA mandatory. There are lots of usability problems with 2FA as well -- if you lose a device or when traveling.
You're basically saying that sites that allow you to log in with just a password, if you choose, shouldn't be allowed to exist. That seems unreasonable to me.
> You're basically saying that sites that allow you to log in with just a password, if you choose, shouldn't be allowed to exist. That seems unreasonable to me.
I'm saying sites that host information of value, such as genetic information, should not be allowed to support login with just a password. That seems reasonable to me, and a regulatory gap to be closed. If you don't want to use MFA or other secure auth systems on Reddit or Twitter, by all means, I'd agree that secure auth for low value systems might be overly burdensome to a user population. There are well worn paths if you lose MFA (remote identity proofing, mailing an OTP to known addresses, dinging a credit card $1, etc) that are all reasonable and affordable to implement.
Is your argument that the data 23andme hosts is not of value or sensitive and it should not matter if their security story is lacking ("just passwords are fine, yolo")?
EDIT: I think we fundamentally disagree on the issue.
> such as genetic information, should not be allowed to support login with just a password. That seems reasonable to me
But that isn't obviously reasonable to me, that we need a law for that.
What if I don't think a bunch of estimates based on a bunch of my gene readings is all that valuable? Why not let me choose to use just a password?
But if I do think it's super valuable, then I can use 2FA. (And also obviously choose not to share any of my information with anyone else on the site.)
Why should it be the government's job to remove that choice from me?
How about a middle ground, where if I set up MFA on my account, I automatically disable access for "distant relatives" who haven't set up MFA, even if I want to share my data with them. Because fundamentally this incident is not serious if such transitive access was not employed in the first place.
And since this is a specific access pattern for 23andme, I agree we shouldn't involve government here.
Google defaults to Passkeys now [1], and has very aggressive heuristics around logging in [2]. They also maintain their own version of HIBP internally [3], and will force a password change [4] under certain circumstances.
They are doing this because when they have high assurance of your identity (and your account hasn't been taken over), that is the best time to issue the cryptographic credential (the Passkey) which improves go forward security of the account. Over time, accounts should filter over to Passkeys, and at some point, they will likely deprecate passwords (or require high confidence you are you to login with just username and password, vs a Passkey). I've had a discussion with someone on the project at Google, and they could only say "stay tuned" about what comes next. To be clear, I'm not divulging anything beyond what Google made public in their blog post and a bit of speculation on my part.
> Do you think google is deactivating people based on HIBP? If not why do you think everyone else should?
TLDR "password resets and account lockouts vs deactivating users" and "because it is good practice to protect your users and their data from compromise"
[4] https://support.google.com/accounts/answer/98564?hl=en ("If there’s suspicious activity in your Google Account or we detect that your password has been stolen, we may ask you to change your password. By changing your password, you help make sure that only you can use your account.")
I just created a new gmail account to test this - it asked me to create a password (minimum 8 characters, I used lowercase letters and numbers only) and didn't say anything about MFA or passkeys. I'm not going to fact check every other claim since the first one failed so utterly.
> This means the next time you sign in to your account, you’ll start seeing prompts to create and use passkeys, simplifying your future sign-ins. It also means you’ll see the “Skip password when possible” option toggled on in your Google Account settings.
Did you even look at their provided links? You took the time to create a new account, why not actually look at the provided links to see what is being claimed in the first place?
I read the link - I don't think "we will hassle people about this eventually but not even give them the option at signup" is the traditional definition of "default" though. Do you?
Prompting on first sign in is pretty “default” to me.
I highly doubt you read the link, otherwise you wouldn’t have gone through the whole sign up process just to prove something isn’t a “default” according to you. You’d have just referenced the article and made the exact same point.
Other than allowing at least 14,000 login attempts from the same system without blocking suspicious activity, and not using services like haveibeenpwned to prevent users from reusing passwords.
It wasn’t from the same system. It was from a botnet. The account credentials were leaked on Tor. Is haveibeenpwned omniscient? Do they know about every breach out there without fail?
I work in fraud prevention and I would love to know how to detect a botnet system, beyond the usual velocity checks. A decade of working in this space and I haven't found a reliable, fail-safe way to do this. Genuinely interested to know if there's a suggestion.
A botnet is not a single system. It is a network of multiple compromised computers or devices, in all kinds of locations. Each piece of a botnet could be in a different country or even city.
HIBP only knows about breaches that are made public. Based on the current evidence, this was not a breach that was made public. It was a breach being sold.
>Other than allowing at least 14,000 login attempts from the same system
But from 23andMe's end, they didn't see these login attempts originating from a single system because it was from several locations, with different user-agents, fingerprints, etc.
How do you propose they were to identify these various computers to be part of the same botnet?
If you have a reliable way to do so, many people would pay you large amounts of dollars for that service.
Watching failed password attempts across the board is a simple metric. Correlate that with the IPs responsible for the sudden increase and you have a starting point for your investigation and remediation.
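A toy sketch of that site-wide metric (the window and baseline numbers are arbitrary):

    import time
    from collections import Counter, deque

    WINDOW = 300           # seconds
    BASELINE_FAILS = 200   # hypothetical "normal" failures per window, site-wide

    failures = deque()     # (timestamp, ip) of recent failed logins

    def record_failure(ip, now=None):
        now = now or time.time()
        failures.append((now, ip))
        while failures and failures[0][0] < now - WINDOW:
            failures.popleft()

    def suspicious_ips(now=None):
        """If site-wide failures spike above baseline, return the IPs driving it."""
        now = now or time.time()
        recent = [ip for ts, ip in failures if ts >= now - WINDOW]
        if len(recent) <= BASELINE_FAILS:
            return []
        return [ip for ip, _count in Counter(recent).most_common(20)]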
> The term computer system may refer to a nominally complete computer ... or to a group of computers that are linked and function together
The only thing that I added was "that are distinguishable" because you implied that the server ought to be able to tell that this is a coherent attack by a single system and not just normal traffic from unrelated systems.
If victim hosts do not have enough information to recognize the disparate computers as part of a botnet, then from the perspective of the attacked host the computers are separate systems.
So you substituted the definition of "computer" for "system" then complained that my use of the word "system" somehow implied a single computer. Interesting.
This... isn't a response to what I just said. I very clearly just cited the Wikipedia definition of "computer system", and explicitly called out the possibility of a botnet composed of multiple computers. From the beginning, you were the only one who introduced the notion that I might have meant that a system exclusively meant a single computer.
I try to assume that people are interacting in good faith, but it's getting very difficult. Have a nice day!
I think you're being a bit dishonest with these replies. By any definition of the word "system", you're implying that they act in concert as a single unit. The entire point of a botnet is that none of the endpoints are individually distinguishable from the rest.
The word 'system' is ambiguous and has multiple meanings. In general, and as per HN guidelines, you should assume good faith, and when someone guesses incorrectly about your assumed meaning clarify your definition.
That said, I'm shadow banned so you should probably ignore my advice on HN guidelines.
Users reuse passwords. We've had decades of trying to educate people to do something else (use a password manager! come up with a unique password and keep it in a little book! anything!) but it simply hasn't worked.
That's an engineering fact. It would be good if it weren't true, just as it would be good if virtual memory were indistinguishable from RAM, but it just ain't so.
To be a responsible engineer, you've got to design and build for the real world, and that means not relying solely on username and password for extremely sensitive data.
> Nothing on 23andme’s end failed unless you consider someone using a correct user/pass combo while not being the owner as a fail on the part of 23andMe rather than the end user.
This seems to be the big societal discussion, in the same way that people blame banks for them sending money to crypto and romance scammers overseas.
Your analogy is unfair. There's not really any evidence any of the users were using the platform in any way other than intended.
I think this would be far more akin to finding out someone has stolen a card number, which has happened in breaches, and used it to purchase a lot. Generally, we do expect recourse on the bank's end.
It's not unfair. The data that was exfiltrated is data that was accessible to the users.
If someone gives their routing number and checking account number to a scammer, that is also considered "using the platform in any way other than intended". In 99% of cases, you'd be providing that information to someone you had an actual business relationship with. My employer, for example, might have that info in order to process my direct deposit payments. A creditor may have that info in order to process ACH payments. Giving that info to a total stranger would be an issue but that wouldn't be the bank's fault. Neither would it be the bank's fault if you chose a poor or reused password.
That's what happened here. Users shared data with total strangers who requested their connection to their DNA data based on some percentage of shared DNA. Users accepted those requests. The users who reused their passwords had all their info accessible. The users who accepted sharing requests with those users had their shared info accessible. Both cases are "using the platform as intended".
> Nothing on 23andme’s end failed unless you consider someone using a correct user/pass combo while not being the owner as a fail on the part of 23andMe rather than the end user.
Well not recognizing you have 14k logins coming from the same place, possibly with a lot coming from someplace else than the last login on the account, is definitely a failure on their part. That's why more and more websites send you emails to allow logins from a new location. Or have login rate-limiters (too many request from your network).
I wonder how easy it is to have the location (at least country) of a user from the breached data, to use bots in the appropriate country and evade "login from a new location" protections. I guess easy enough if whole accounts have leaked.
If it was indeed a DB leak, as claimed, some sites will potentially have emails/hashes/passwords/last known login location/IP. It's not a stretch to think that a botnet could run from not only the same country but even the same region or city as the last known login or IP cluster.
This is why MFA (not SMS-based) is so critical, especially for services like this. I sometimes hate it when it is forced on me as a consumer, but for these types of services it absolutely must be enforced. Kind of a cop out for 23andMe to just blame the users, even though users should take some of the blame of course.
In 2024, if you want to access a highly sensitive database, you must be forced to setup MFA at the minimum. My opinion.
Cambridge Analytica also didn't literally hack data or have it shared without a person doing something, right?
It seems the part where 14k leaked credentials provided access to millions of users' data is where it becomes their responsibility. It means that people who were fully responsible still had their information leaked because of overexposure of the information.
We're talking about people who 'friended' others on 23andme, right? How is that responsible user behavior? I had an account with 23andme before I forced them to delete my data, which was not that difficult to do.
One of the things I remember was getting friend invites from random people who were distant cousins, and while I suppose that might be fun conceptually, I never did it because I didn't know any of these people. In what world does a "responsible" user who cares about their privacy add access to personal information, on a website that profiles your DNA, to people who are blood-related but still total strangers? I would call that highly irresponsible, personally. But that's just me, an idiot who avoided all of this by deleting my 23andme account half a decade ago.
There's no way that 14k accounts had 7 million friends actively accepted; that would be 500 friends on average. It must be that they reveal info about you to other people who have a genetic match, and it was heavily scrapable with the stolen credentials.
They could force their customers to use MFA. I'm sure they considered it and decided they'd rather have the additional income. At the same time, my bank doesn't let me opt out of MFA. Why? Regulation. That's the answer in this case, as well.
Let me make sure I've got this right before I shred you.
14,000 users messed up. As a result, hackers were able to log in to 23andMe's computers as those users. (Is that the fault of those users? Absolutely.)
The hackers were able to use those logins to steal the data of 6.9 million users, approximately all of which did nothing wrong. How is that part not the fault of 23andMe?
They shared some of their data with the users who messed up. All of their info wasn't accessible. The only data that was accessible was the data that was shared with these users - in other words, opting in to sharing data with total strangers (which could be argued but is the #1 use case for 23andMe).
So if you want to let user A share info with user B (and as you say, that's likely an essential use case for 23andMe), then 23andMe either 1) cannot let user B mess up, 2) cannot let user A share with user B, or 3) cannot protect user A.
Of those options, 1) is impossible, though they could perhaps have done more to make it harder. 2) ruins a major use case. That leaves 3)...
Well, there were 14K hits from some list of leaked credentials. That likely means that someone was banging away for a good long time, mostly not getting in.
While there could be a raft of IPs working in concert, there should be enough commonality to at least make yourself an annoying target by black-holing IPs that attempt more than a couple of times.
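The fail2ban-style version of that is tiny; the thresholds here are made up:

    import time

    MAX_FAILURES = 5
    BAN_SECONDS = 3600

    fail_counts = {}    # ip -> consecutive failed logins
    banned_until = {}   # ip -> unix time the ban expires

    def is_blackholed(ip, now=None):
        return banned_until.get(ip, 0) > (now or time.time())

    def record_attempt(ip, success, now=None):
        now = now or time.time()
        if success:
            fail_counts.pop(ip, None)
            return
        fail_counts[ip] = fail_counts.get(ip, 0) + 1
        if fail_counts[ip] >= MAX_FAILURES:
            banned_until[ip] = now + BAN_SECONDS   # drop further traffic from this IP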
Tech-savvy and privacy-conscious enough to use Tor, but not enough of either of those things to think twice about using a service like 23andMe, to use a password keeper, or to just be aware that reusing passwords is dangerous.
What a strange intersection of users.
I think you misunderstood my Tor reference. The person that exfiltrated the data from the hacked site attempted to sell the data via Tor; it's not that Tor somehow leaked users' data.
It's hard to detect credential stuffing. If people reuse passwords[1] they are going to have a bad time. Maybe they could have automatically locked accounts that appear in compromises, and while they should do that, I wouldn't go so far as saying they must do that.
Maybe they could have detected the exfiltration, but maybe they couldn't. If the hackers were smart they would have properly distributed the calls and rate limited to avoid detection.
>effective access to 6.9 million accounts
The relatives feature lets you -- if you opt in -- see your DNA relatives and their very basic details, and vice versa. I have literal thousands listed, and those thousands, all over the globe and of mostly minuscule relations, can see mine. That really is being a bit overwrought as a facet of this.
There are lots of ways to mitigate against credential stuffing. There are methods to detect botnets accessing your system at scale. There are products like HIBP that can help prevent credential re-use. You can prevent logins from unusual locations with an additional factor ("it looks like you're accessing this website from Croatia when you've only ever logged in from California, check your email for a confirmation code"). You can force MFA if you want to go nuclear.
I've done identity for bigger places that have credential attacks all the time. There are sophisticated attackers that are aware of each victim's location and can get through geolocation anomaly detection, and there's such a thing as hitting the jackpot through lucky credential stuffing, so any check for failed attempts doesn't hit. It's not possible to detect everything. There are a whole lot of things a serious place will do to detect naive attacks though, so a whole lot of volume there fails. It might even be good to let an obvious stuffer keep attacking you and help us mark the accounts they have working credentials for, so we can instantly lock them and ask for password changes.
I have no idea of the actual sophistication of the attackers here though: It's way too common to see big companies that have paid no attention to prevention, and therefore will only notice an attack if it becomes an accidental denial of service attack. Maybe 23andme are sophisticated and only the worst shared passwords got breached, or maybe they have minimal security.
In security, it should ALWAYS be assumed that users are naive and will use the least possible means for account security; it is the responsibility of the service provider to enforce these policies. Let's see:
- Did 23andMe enforce a strong password policy during account creation, with a minimum length, a required mix of character types, and a complexity meter?
- Did they send periodic reminders about account security, updating passwords, secret questions, and the like?
- Did they enforce 2FA?
- Did they count failed authentication attempts?
And those are just off the top of my head; NIST, PCI, and other standards have more details. In fact, the security level provided by such services should go beyond the bare "standards", because once this data leaks you can't change it. Blaming the users shows a lack of accountability. I'm glad I never trusted my DNA to any of these services.
Requiring 2FA is the only real answer here; your other suggestions are unnecessary red tape. (A strong password can still be reused. Periodic reminders will 100% be ignored. A failed auth count is silly because it falls back to 2FA, so just always require 2FA?)
Imagine you already know the passwords for many emails, or likely password patterns, from other sources. That's the kind of the attack we're talking about here. Also, those attacks are normally performed very slowly, and probably through botnets.
(And yes, 2FA is the only real answer here, preferably YubiKeys to also defeat phishing)
In case you refer to incognito/private windows: that's completely useless. All you can hope to get from it is automatic deletion of cookies. Google got a huge fine for tracking everyone in private mode [0] and you can be certain everyone else who didn't yet get dragged to court is still doing it.
> Well, you if want to both refuse that a site reliably identify you and have a flawless process for identifying you, you'll have a hard time.
And yet the digital certificate the government issues me means they can flawlessly identify me, when I choose for them to be able to identify me, rather than by them permanently tracking my machine.
Explicitly allowlisting a particular install of a particular browser isn’t a strong choice when we already have PKI.
Literally every time I pay for something via PayPal on my computer, I need to pull out my phone, find the authenticator app, open it, scroll to PayPal, tap it, see if there's enough time for this code or if I should wait for the next one, type the 6 digits into the site...
I mean it takes half a minute, and this easily gets repeated several times a day if you engage in a lot of transaction-type things. And it's no faster if it's by SMS or by e-mail because I'm still spending 15 seconds waiting for the message, and then opening it, typing, then going back to delete the message so it doesn't clutter my inbox -- half a minute total again.
Not to disagree with the cumbersome process - just want to point out that TOTP codes are valid for 30 seconds after the "expire" (60 seconds total). So as long as you are able to remember / copy the digits, there is no need to wait for the next code even if you don't have enough time to type it in. It will still work.
Tangentially, I really wish authenticator apps continued to show the previous code for 30 seconds so I can continue to refer to it for apps that don't allow copy and paste.
TOTP codes are actually valid for 90 seconds, 30 seconds either side of when it’s supposed to be displayed (assuming the display device’s clock is accurate to the second), to allow for up to 30 seconds clock skew on either end, in either direction.
I definitely had no idea! Thanks for that knowledge.
I mean there's never been any UX indication at all that that would be the case. I like your idea of showing the previous code -- that would make it very clear.
To be fair, the reason for this is to account for clock desync between systems, so it wouldn't be correct to say it is still valid for 30 seconds where it might not be in reality. Knowing what this actually means requires understanding the implementation of TOTP, so that you are not surprised in situations where it does fail. The existing authenticator app UX is likely correct for the average user.
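Concretely, a verifier that tolerates one 30-second step of skew on either side looks something like this (RFC 6238, SHA-1, 6 digits, standard library only; a sketch, not any particular vendor's implementation):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, timestep=30, digits=6, at=None, offset=0):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((at if at is not None else time.time()) // timestep) + offset
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        o = digest[-1] & 0x0F                      # dynamic truncation per RFC 4226
        code = (struct.unpack(">I", digest[o:o + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify(secret_b32, submitted, skew_steps=1):
        """Accept the current step plus/minus skew_steps to tolerate clock drift."""
        return any(hmac.compare_digest(totp(secret_b32, offset=o), submitted)
                   for o in range(-skew_steps, skew_steps + 1))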
> see if there's enough time for this code or if I should wait for the next one, type the 6 digits into the site...
In my experience on most services (not sure about paypal specifically) there's a grace period where a code that just 'expired' is still valid for another ~10-30 seconds? So... at least you can skip that part.
> 23andMe then claims that poor password practices are responsible for this data leak.
My account (now removed) relied on a long, unique, generated password + Apple SSO. I don't see how I could've made my 23andMe account more secure (I'm in the 6mm pool of users, not the original 14k).
You couldn't, but the access they have to your info is limited to only the info you shared with an account that reused its password from another site. If you're not sharing info with people you don't know or trust, you're ok.
It's not the only answer. You can also use tools that detect and reject insecure passwords, integrate with HaveIBeenPwned to force-roll passwords that have been previously compromised, etc.
2FA would solve a lot of the problem, but it's not the only option that could have mitigated this.
If the passwords are leaked after the user creates their account with your service, you can't go back and re-check their password against HIBP (unless you're inexplicably storing your passwords as plain text or SHA-1). Using HIBP is a partial solution, but not sufficient to prevent a leak like this.
Mandatory 2FA is sufficient, but not very user-friendly.
That's another step in the right direction, but 23andMe is the kind of service that people create an account for and then don't use for years at a time. Still not a complete solution.
And I agree that mandatory 2FA isn't a good answer either. As someone who uses long, random passwords on all websites, I like to be able to choose whether to add 2FA on top.
> Using HIBP is a partial solution, but not sufficient to prevent a leak like this.
I didn't say it was sufficient to prevent this. I said it was another tool that would have mitigated some of this (and which presumably 23&Me did not implement).
A service that handles sensitive personal data should absolutely have mandatory 2FA. Calling anything else “reasonable security measures” is laughable.
This. Saying “It’s not our fault, it’s yours.” isn’t going to fly when the government comes looking. I remind everyone that there’s a new law about data breaches in effect. [1] The FTC will want to know what their security posture is and whether that meets HIPAA compliance rules or not. If not, they may lose their HIPAA compliance and be barred from applying, rendering their whole operation illegal under the FTC (s/you can only collect health information under HIPAA/you can but you still need to notify the FTC when you’re breached/).
This is a serious HIPAA violation not just a security breach. This defense of theirs isn’t a smart strategy if they want to stay in business not to mention the impending lawsuits.
23&Me is not subject to HIPAA, unless they are acting as a health care provider, or business associate (not sure but I don't think they are in this context).
Most people misunderstand HIPAA, and think it applies in situations it doesn't. This is not a situation where HIPAA applies.
HIPAA is NOT a privacy law. It's a law that mandates portability of medical data, some details of which overlap with privacy.
"Emerging technologies such as genealogical databases (i.e. 23andme and Ancestry) as well as wearable devices and mHealth apps have created a new risk for data privacy that is not covered by HIPAA. These digital health tools are not covered entities therefore they are not required to protect the data they collect under HIPAA. The Department of Health and Human Services nor the Office of Civil Rights have purview over this data or any breach of the consumer's information. Any complaint regarding a breach of consumer's health data is rejected, as there is no controlling law currently for this type of data. Complaints of this type go to the Federal Trade Commission; however, many consumers are never aware that their information is breached, shared or sold to a third party because there is no breach notification requirement in place."
So while 23&me is not under HIPAA compliance rules, they are still under the purview of the FTC according to this. Which would mean that the FTC can examine their security posture and determine if it's adequate or what have you. Odds are they will just be slapped with a fine and back to business as usual. Which kind of makes me upset because we are dealing with DNA and ePHI whether they are HIPAA or not.
In the unlikely event hackers were prosecuted for actually breaking into them, I wonder if there is any material in this release the defendants could use to their benefit: "the purported victim here says in their own words their system was secured properly".
The sad reality is they will be audited, found lacking, slapped with a small fine, told to implement 2FA as a requirement for all accounts, and go about their business. That’s been the precedent the last decade.
The potential impact of a breach doesn't just affect you, and their security decisions should be made with a wider consideration of concerns than the short lived frustrations of users.
It varies, I guess. For a normal end-user account on a system where no interaction between users is possible, it pretty much just affects you.
For some kind of admin account with privileged access to other users' data, then it definitely affects others.
One might expect increasing mandatory security measures correlating with increased potential damage of a breach. Similar to safety measures on mass transit vs. personal vehicles.
Your liberty starts where mine ends. By the same logic password complexity should be left up to the users as well, but what responsibility is this user willing to shoulder when they are the reason sensitive information leaks?
I’m sure most people on HN have great passwords stored in password managers, but 99.9% of users are not like that, so mandatory 2FA does not only make sense, it’s the only reasonable choice for sensitive information.
To be honest it's amazing they are still in business.
The company was originally founded on unreasonable goals in the health industry, using DNA array testing to identify risky variants in individuals to help produce better treatments.
It took the CEO about a decade to learn enough to acknowledge that their approach would never have achieved this, because the mapping from genome to risk/treatment is a highly complex function and their mechanism was underpowered and they also repeatedly pissed off and ignored the FDA who then shut them down for a while. The only reason they survived this was, afaict, the CEO's ability to extract money from google to keep operating.
Eventually, the company found that they could do identity by descent really well, much more useful to customers than telling them their earwax properties, and their "recreational genomics" products were extremely popular- enough to sustain a service, but not really enough to sustain advanced research.
They finally got some pharma to give them a bunch of money for their data (basically all the genomic and phenotype data that they collected on their users) ostensibly to do translational health research, but this has not been very productive (and seems unlikely to be truly transformative).
In the meantime they have to keep running their consumer platform, and it clearly had security issues that permitted a large-scale data extraction (that's on them, not the customers). I just can't see how they keep getting money to operate, because their track record in translating data to profit/medicine has been so skimpy.
It cost me about $99 to get my DNA mapped by them a decade ago. I used a 3rd party service to sync my results with SNPedia, which returned a pretty cool report; at the top of the report was a very dangerous gene that I was the first in my family to discover, and that discovery has since saved lives. 23andme added that gene to their reports at least 5 years later, if not more.
My doctor scoffed at 23andme finding a dangerous genetic mutation and said it's probably just a false positive. I had to spend $500 to get a single gene tested in a hospital, and it still came out positive.
So bang for your buck, that $99 was a great deal for a full mapping; it feels like most of their issue is what the government allows them to show. I'm pretty sure that SNPedia syncer isn't online anymore, but that was what made 23andme a great service for me.
I'm happy that this found a dangerous mutation in your family and saved lives, but what's at stake is unfortunately much larger and not worth the risk for most people IMO. You are risking the publication of the genetic makeup and vulnerabilities not just of yourself but of thousands of people living today and who will live for the next several generations, and to some extent in perpetuity. I don't want to condemn my family to that risk yet! I'm sincerely hoping a security-first service that actually earns its trust shows up soon!
Is their OpSec better than my local doctor's or hospital's? I feel like I hear about ransomware attacks on hospitals every week.
My DNA was also scanned[1] and saved by a hospital before having my first child, as most people with my heritage do. So that's 3 times in my life I did genetic testing. Which one is more prone to be hacked - Mount Sinai, LabCorp, 23andme? Who knows.
I wouldn't trust any of them sure. At the least the 500 disorder panel is not by itself completely dangerous to your kin though - you can't map that via genealogy. Agree it's still dangerous. Honestly I'm not sure what I'd choose if I'm presented with doing this test.
Yes, and the reason for this is that the tests in the hospital are qualified/validated. That is to say, they undergo much more stringent quality checks before they call a positive (i.e., predict that you have a dangerous mutation), because the cost of false positives is extremely high.
When I decided to have my genome sequenced, I went with Illumina's Understand Your Genome. This was a project they had to basically get rich execs to have their genome sequenced, learn their risks, and then invest in VCs investing in genomic startups (although it was just marketed as a product, Illumina had a larger goal).
I had my blood taken and got a whole genome sequence: a 50GB file of reads off the machine, along with variant call files that should show how I differ from the reference genome, and another variant file that called out risky variants.
You can download the files (https://my.pgp-hms.org/profile/hu80855C).
When you did UYG you'd go to this fancy spa in La Jolla and they give you an iPad with the files and you also talk to some genetic counselors.
I made a number of interesting observations when talking to the counselors. The first is that they said they were confused by my report because it said I had absolutely no known risk variants (apoe, bcl, etc) and they had never seen that before. They also said that when they see some rare variants they would just google for the variant and read random papers. What they said convinced me that genetic counselors, and genome tests in general, have limited applicability: there are a few genes where variants are clearly associated with negative disease outcomes, and the tests for those are very valuable (this is especially true for cancer, but other diseases as well) because they are clinically actionable.
But it also showed me that counselors are making up garbage, because scanning the raw literature for variants and assuming that because a person has that variant they will be at risk is not a good assumption. In my mind, the variations on polygenic risk scores have convinced me that we need to build large-scale (whole-genome) models of disease that use nonlinear functions trained on extremely large-scale datasets (like UKBB) to build up holistic predictive models that do a better job of encapsulating the complexity of biology and its relation to disease, to the point where we can actually start making useful treatments and cures for a wide range of genetically determined diseases.
>In my mind, the variations on polygenic risk scores have convinced me that we need to build large-scale (whole-genome) models of disease that use nonlinear functions trained on extremely large-scale datasets (like UKBB) to build up holistic predictive models that do a better job of encapsulating the complexity of biology and its relation to disease, to the point where we can actually start making useful treatments and cures for a wide range of genetically determined diseases.
This is exactly where AI is going in the biomedical space. However, there's more than just the genome, you need to integrate multi-omics and some of the necessary tech hasn't been invented yet.
Alerted by a friend who noticed his profile had changed, I checked mine. Going from having no French ancestry when first processed 5 years or so ago, I now have 12% French ancestry. That's a huge deal for me, because one of my parents was adopted. Hugely unimpressed at the lack of transparency (pro-active notification, anyone?) and the overall accuracy - I don't trust these results any more than the original, but the original did cause genuine distress.
On the other hand, my dad signed up for 23&Me and one day got a ping.. and then a few more... and a few. more. It turns out he had made a "donation" years earlier, and apparently the donation was used for IVF and a bunch of kids were born. It was kind of amazing to learn that I had a bunch of half-sisters and brothers around the US.
IBD is pretty reliable and it tends to get better over time (hence the late-arriving signal of French ancestry). I don't understand the "lack of transparency" and the accuracy comment you made: they didn't have the knowledge before, so presumably they reported "European" rather than a wrong country?
All the other DNA test companies (MyHeritage, Ancestry) use DNA kits as a funnel to the more lucrative genealogy subscription services. But that market is pretty crowded and established, so 23andMe is stuck trying to sell DNA data.
It's all about branding. They got in early and really got their name out there. Everyone has heard of 23 And Me. Off the top of my head, I can't think of another brand (though I know they exist), and people I know IRL still refer to 23 And Me.
Well the ironic part is that their SNP array now would be fairly effective because genome imputation using 1k genomes etc has gotten so good. But now WGS is cheap enough that the value prop of using imputation is getting more and more marginal.
It makes me so glad to hear that, because my first reaction (well over a decade ago) to people using arrays and imputation was: the data is there, why are you estimating it probabilistically? (Realistically, at the time, whole genomes cost too much, and scientists will always do short-term probabilistic estimation to get an approximate answer quickly.)
The CEO, Anne Wojcicki, was married to Sergey Brin (founder of google): https://www.nytimes.com/2007/05/23/technology/23google.html
Her sister Susan was also an exec at Google and provided space for the company when it first got started. Sergey loaned a bunch of money to 23&me before that, and his loan was repaid when google invested (which sounds like a conflict of interest to me, but then again, Sergey is a bunch of conflicts of interest).
Later, GV (Google Ventures) invested more in 23&Me. At the time, the head of GV was Bill Maris, who had worked with Anne before she founded her company (and I was an advisor for GV around that time; if they'd asked me, I would have said "don't put more money in this turkey").
So basically a little family of people who wanted to see the company succeed. I think they've been given their time to succeed and at best, have reached a sort of steady state where they are not going out of business, but also aren't achieving the interesting mission they were based on.
I'm surprised the health insurance industry isn't implicated here. Genome comparisons are a goldmine for creating risk models for individuals... I would even speculate that the whole thing was driven by the insurance industry rather than the pharma industry. "We're doing this for medicine!" looks like PR lipstick.
I would be shocked if there aren't huge loopholes in GINA or they weren't doing this under the table. Like you sprinkle some derivatives of GI into your huge model and then "our model assesses that risk XYZ is likely..." without ever mentioning GI to the customer.
Our main healthcare "privacy" law in US, HIPAA, is structured to protect insurance firms' right to our private health data (while otherwise sensibly restricting access to it). It is not a given that private finance firms ought to have legally protected, virtually exclusive access to our sensitive health information, but they do in America. Facts like this make me skeptical that GINA was written and is enforced in good faith.
> I would be shocked if there aren't huge loopholes in GINA or they weren't doing this under the table.
Sorry but you have no idea what you are talking about. Big corporations are absolutely terrified of accidentally using health data illegally, no insurance company in the US would touch this with a 10000 foot pole.
> Big corporations are absolutely terrified of accidentally using health data illegally
I am acutely aware of this, and am also aware that most "big corporations" have nothing to gain from mishandling/abusing PHI. But health insurance firms obviously do.
Another notable analog is the credit reporting industry. Despite serious & repeated abuses of consumer financial privacy, these companies consistently get off with a slap on the wrist. And we're supposed to believe that their neighbors, the insurance industry, are good guys from a radically different paradigm?
Credit scores, unlike healthcare plans, are not for consumers, they are for creditors. Consumer loans are the real analogy you are looking for.
The domain of private financial & life info they have direct legal access to is pretty absurd. Many creditors will additionally run a background check on their customer prior to finalizing a loan. What "private" data remains sacred at this point? Why do we have regulations like ECOA in the first place? Because of many such abuses.
On one hand, I have to somewhat agree with 23andMe here. If someone uses the password "password1" for some service, they should not be able to turn around and blame that service when their account is compromised.
On the other hand, 23andMe should have definitely done much more to reduce the blast radius of this attack. Mandatory 2FA, disallowing known-compromised passwords, geolocation of login IPs, etc.
I guess the question shakes out to: where do we draw the line on personal responsibility vs. service responsibility? Services can't be responsible for 100% of user security. But they also can't be negligent in their own security and mitigations.
Where your first argument falls apart is the other several million users who had their data breached didn’t have compromised accounts. It quite literally isn’t their responsibility or problem at all.
When you send people secure information through any means, you are accepting that the security of that information is now equivalent to the worst level of security practices of all the people who received that information. When the recipients are a large collection of random strangers on the internet, you should assume the data is as good as public, because the average random stranger on the internet has terrible security practices (not to mention the non-zero probability that they're actively malicious).
23andMe could have done a better job communicating the risks of sharing your data with random strangers on the internet, but it's also not unreasonable for them to put some level of blame on users. If you wanted to treat that information as secure, you shouldn't have opted in to sharing it with an arbitrary number of strangers.
I'm not sure I see your point unless you're saying that opting into a feature a site offers also means they are taking on implicit risk of that feature causing a data breach through no fault of their own? Again how is that their problem and not the website's?
Really? Seems analogous to sharing, I dunno, a private Google sheet with somebody. If my friend's Google account is compromised, because they reused a password lost on another site, I wouldn't blame Google for the attacker being able to read the sheet. I shared it with my friend knowing that might happen.
Of course it means exactly that. If you opt in to a feature that shares your data with other users, you're explicitly opting in to the risk that those users will steal/leak/share your data with a third party, either intentionally or through their own negligence.
If 23andMe had made any claim that they took steps to force users to secure their accounts properly or that they implemented measures to prevent data exfiltration then perhaps you could argue that you should have been able to rely on those claims, but as far as I can tell 23andMe made no such claims.
Not gonna bother responding to the turfer comments in this thread but they're roughly analogous to saying "because you use facebook it is therefore your fault when facebook leaks your data because you voluntarily shared that data with facebook."
Legally, logically, and ethically this is an absurd argument on its face.
To borrow your analogy, though, Facebook (23andMe) didn’t leak anyone’s data. That’s the issue with your position.
Also, the turfer comment makes you seem like a conspiracy theorist. There's nothing untoward or off about the replies you've received so far that would suggest astroturfing.
Who did then? It happened on their site, they had the means to control/monitor/mitigate it. Are you saying then that if someone hacks into facebook, steals data, it is then not facebook's responsibility that that happened because they didn't publish the data?
Even the backwards cybersecurity laws in the US don't work that way.
I think you’re misunderstanding what happened in this situation. Nothing was stolen from 23andMe and no 23andMe accounts were “hacked into”. The “hack” happened on another site and the hacker got a database leak of usernames and passwords for that site (not 23andMe). Some of the users of that site used the exact same email addresses and passwords for their 23andMe accounts.
If you use “Hunter2” as a password for all of your accounts and AOL gets hacked, the hackers know your password is “Hunter2”. If they get into your Facebook or Gmail account because you also used “Hunter2” there, that is neither Facebook’s or Gmail’s fault. It is your own fault.
So now we're playing a semantics game about what the word "hack" refers to in this context? users gained unauthorized access to 23andMe and used that access to get access to data, and 23andMe had full control of mitigating, monitoring, and preventing this type of attack. Is that better? Doesn't really change my salient point at all.
In your example the site is fully capable of preventing weak passwords or enforcing things like MFA that make this type of attack a lot less effective. It may surprise you to know that most websites already do this!
No, we're not playing a semantics game. The access wasn't "unauthorized" if the person that "hacked" it was using the person's right email and password. MFA was also available and the hacked accounts did not have it enabled. It's not 23andMe's fault that users reused passwords and chose not to enable MFA. This isn't about weak passwords or passwords that were known to be leaked on sites like HaveIBeenPwned. Was there more they could have done? Of course. Is it their fault? Absolutely not. Are they liable in any sort of legal sense? Absolutely not.
Not the GP, but that's clearly the case IMO. If I share my tax records with my accountant and their office is broken into because they left the office key somewhere, do I blame the office building or the accountant?
We can argue about whether the office building should have had better security and noticed weird people around, but ultimately it's the accountant's negligence that allowed my info to be compromised, and if I suspected they weren't the best with their security, I should have factored that into who I decided to share my info with.
Because the breach wasn’t on 23andMe. It was on another site.
The only way you’d be affected by this is if you used the same password on multiple sites (where one of those sites actually had a breach) or if you shared your DNA profile (since it is opt-in) with someone that reused passwords. In the latter case, only the info you shared with that person would be accessible to someone using a compromised account.
In other words, if you shared your info with someone you didn’t know and didn’t trust, your info can be used by bad actors.
Do you share your banking info with complete strangers? If not, why would you share your DNA with them and be surprised that they could do something stupid, foolish, or reckless?
I'm not sure I see your point unless you're saying that opting into a feature a site offers also means they are taking on implicit risk of that feature causing a data breach through no fault of their own? Again how is that their problem and not the website's?
Isn't that obvious? If I share a Google doc with you I'm taking the implicit risk that your account might be compromised along with my document when you e.g. forget to log out at a public computer.
I think it is unreasonable to blame site when you allow sharing data by certain separations. Like in social media set your friends' friends to be able to see your data. If one of these friend's friend is bad actor. Is it really on the site to prevent them seeing the data?
Those users signed up for a service with poor security controls (no 2FA, no requirement to rotate passwords at regular intervals) and then checked a box saying "share my data with other accounts."
So while I agree with you that those users are not responsible for the accounts that were actually compromised, they were fully responsible for sharing their data on that service without fully thinking the implications through. 23andMe is not blameless--it's their poor security controls that allowed it to happen in the first place--but I strongly feel people do not take security and privacy as seriously as they should and as a result do share at least some of the blame.
Is that true though? I agree they're annoying and in an ideal world where users don't reuse passwords or leaked hashes can't be broken they'd be pointless--but in this case I think it certainly would have protected at least some of the accounts that were reusing breached passwords. Is there actual evidence/research that proves password rotation has no effect on security in the event of breaches?
>in sum, these security-specific observations and the results in Section 3 suggest the security benefit of password aging policies are at best partial and minor. Combining this with the well-known and widely experienced (negative) usability impact of password aging policies, and results [18] mentioned earlier on high predictability of new passwords from knowledge of old, the burden appears to shift to those who continue to support password aging policies, to explain why, and in which specific circumstances, a substantiating benefit is evident.
>Although change regimes are employed to reduce the impact of an undetected security breach, our findings suggest that they reduce the overall password security in an organization.
There have been several more, and I'm sure that NIST and others did their own additional analysis prior to changing their recommendations which may not have been made public.
Fair enough. Seems like the conclusions drawn are not that it doesn't improve security, rather it does not improve security enough to justify the added burden to users and support staff.
I'd venture that this 23andMe situation is one of the scenarios where password expiration could have significantly improved the outcome, but I concede that it was a poor example for me to use.
IMO the problem is that even if you have the best most secure alphanumericpunctuated leetspeak 20 character password, your data was still compromised if your third cousin once removed had "hunter2" as their password.
It's like Cambridge Analytica- each compromised account let them dump data for hundreds to thousands of people
>your data was still compromised if your third cousin once removed had "hunter2" as their password.
And you opted-in to share your DNA data.
But yes, the entire business model of 23andMe makes me uncomfortable. But it's a bit removed from the password security stuff I wanted to focus on, especially as the password security stuff is applicable to any type of service.
You don't even need to bug your users with that pain-in-the-ass 2FA. Just don't let them choose a password; send them a strong random one by email when they sign up. If their mailbox is compromised, it is game over anyway, as it allows an attacker to reset every password.
> On one hand, I have to somewhat agree with 23andMe here. If someone uses the password "password1" for some service, they should not be able to turn around and blame that service when their account is compromised.
I call BS. If the service thinks the user's password is acceptable to perform authentication, how should a user know they are actually wrong about that?
Either it is flawed, and therefore the service's job to catch, or it is acceptable. But the service doesn't get to say afterwards "haha, that was really dumb of you, you should have used a stronger password".
>Either it is flawed, and therefore the service's job to catch, or it is acceptable. But the service doesn't get to say afterwards "haha, that was really dumb of you, you should have used a stronger password".
You are missing the category of attack that happened here.
The password was acceptable. But the users used the acceptable password on multiple websites. A different website was breached, and the password was leaked.
It is not 23andMe's responsibility to check if other services are breached, cross-reference the users in that other service, get the leaked password list, and then see if those leaked passwords are currently in-use on their website on accounts that are used on both sites.
However, as noted in my top-level comment, they should be checking against known-compromised passwords at password creation/change time, and disallow those.
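For what it's worth, that check is easy to wire up. Here's a minimal sketch (assuming Python and the public Pwned Passwords k-anonymity range API; only the first five characters of the SHA-1 hash ever leave your server, and the function name and error message are mine, not anything 23andMe actually runs):

    # Minimal sketch of a "reject known-breached passwords" check using the
    # Pwned Passwords k-anonymity range API. Only the first 5 hex characters
    # of the SHA-1 hash are sent; the full hash never leaves the server.
    import hashlib
    import urllib.request

    def breach_count(password: str) -> int:
        """Return how many times this password appears in known breach corpora (0 = not found)."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as resp:
            for line in resp.read().decode("utf-8").splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    # At signup or password change:
    if breach_count("password1") > 0:
        print("Sorry, that password has appeared in a known breach; please choose another.")

The same lookup can also run at login time, to prompt a reset when a password later shows up in a newly ingested breach.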
> It is not 23andMe's responsibility to check if other services are breached, cross-reference the users in that other service, get the leaked password list, and then see if those leaked passwords are currently in-use on their website on accounts that are used on both sites.
To play devil's advocate here, why not? Plenty of companies (e.g., Tumblr) specifically do this and require email verification + password change if yours was breached.
It would make the world a better and more secure place if companies took proactive security measures. There is even a financial incentive for them to do so because it mitigates risk.
>It would make the world a better and more secure place if companies took proactive security measures.
I _absolutely_ agree. I just do not think it is possible to require every company to monitor every data breach, check those breaches for emails that are in-use on their service, check the passwords (not always possible), and then require a change if the password matches.
>Plenty of companies (e.g., Tumblr) specifically do this and require email verification + password change if yours was breached.
You're saying that if HackerNews was hacked and my password was leaked, that Tumblr will ingest the breach data, cross-reference if I have a Tumblr account, and then have me change my Tumblr password? Are you sure? Do they have a documented process on how they do this?
Edit: I've spent some time now looking at the Tumblr website and do not see any indication that they do this, but would be happy to be corrected. Or a link to any company that does this, it doesn't need to be Tumblr.
I think 23AndMe could perhaps have done better in detecting this sooner – "perhaps" because with a sufficiently large botnet that's not so easy to detect quickly and details are not available AFAIK.
But other than that, I ... kind of agree with 23AndMe: users should be primarily responsible for their own accounts. I don't like the "assume all users are blubbering morons and treat them as such" security model, where $corp then gets blamed for treating their users as adults. Again, 23AndMe could have done better, maybe, but I strongly disagree that they're primarily responsible – at best they're partly responsible.
And maybe 23AndMe also could/should have pushed 2FA harder, I don't have an account so don't know how hidden this feature was or not. All I know is that mandatory 2FA is a right pain for me, adds basically no security for me because I just store it in my password manager next to the password. For TOTP it's just an inconvenience, but I really dislike phone-number based 2FA – I've been locked out so many times...
> Such information would only be available if plaintiffs affirmatively elected to share this information with other users via the DNA Relatives feature.
I'm skeptical that 6.9 M users opted-in to an off-by-default setting. That seems absurdly high for any opt-in feature that involves nebulous user value. I don't use 23andme, but I'd love it if someone had screenshots of this supposed "opt-in" before the data breach.
Also, how far does the sharing go? How far removed from a family member does a user have to be to see their info? Going from 14k to 6.9M seems like it must have been more than just immediate family, given the small family size common today.
It seems pretty plausible to me, because finding relatives -- especially unknown ones -- is one of the 3 main use cases of the product (the other 2 being understanding your ethnic background and health/disease markers).
They have ~14M users total, so this is 50% of them opting in. That seems entirely reasonable.
But how does the hacker get the DNA data? A feature to find relatives is one thing, exposing all those relative's DNA to the current user is another thing entirely. All the math would be done on the backend, not exposed to the end user, no? Or is the implication here that by getting DNA on the 14k compromised users, the attackers can draw meaningful conclusions on the DNA of those in their network?
> But how does the hacker get the DNA data? A feature to find relatives is one thing, exposing all those relative's DNA to the current user is another thing entirely.
They didn't, unless I'm misunderstanding. They just got some names and a number indicating the degree of genetic similarity or something like that, right?
This is 50% of them opting in AND being considered relatives to the 14k who reused vulnerable passwords. For all we know, there may be 99% opt-in but only half the opt-ins are "relatives".
The opt-in is a prompt presented during onboarding with language that mainly focuses on "connecting" and "exploring" who you're related to.
The prompt is similar in mobile but here[0] is a screenshot of what it looks like on the web.
The fine print talks a little about what you’re sharing.
Where the lie comes in is that after selecting “Get Started” it also automatically enables sharing your ancestry report and the other default privacy settings are very permissive and work on an opt-out basis.
It doesn't, for example, then walk you through the settings and ask what you want to share. It enables everything by default.
> Also, how far does the sharing go? How far removed from a family member does a user have to be to see their info? Going from 14k to 6.9M seems like it must have been more than just immediate family, given the small family size common today.
Looking at mine, it shows me 4th cousins and closer. So way more than immediate family.
The user value isn't nebulous. If you are curious about your ancestry, you might also be curious to see which people you discover, or whether your family tree matches your real family.
Yeah, that's a pretty dumb move on their part: by immediately mitigating the attack they showed the whole world it was possible the whole time, via an industry-accepted best practice no less.
Why people don't get that TOTP is just a "strong unique password" you can enforce from the service provider side is beyond me.
As mentioned in the article, a few mitigations could have been applied that would have helped, though not eliminated the problem. None of these are perfect; naysayers will pop up lamenting "it wouldn't work", but the point is it would have helped.
1. Fraud detection on metadata like IP address, access timing, access patterns, etc. E.g.: why is a person from the UK logging in from a China IP? (A rough sketch of this kind of check follows the list.)
2. IMO orgs should be importing and refusing known leaked credentials and the top 1000 passwords. This could happen both at password set time ("You cannot use that password as it's a known leaked credential, click here for more info about the breach"), or at login time "You're using a leaked credential, please follow the password reset flow".
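To make point 1 concrete, here's a rough sketch of the kind of heuristic that would flag a login for an extra email challenge. Everything in it is an assumption for illustration: the Account fields, the thresholds, and the country_of / failures_from_ip callables standing in for a real GeoIP database and a real failure counter:

    # Illustrative only: not 23andMe's actual logic.
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Callable, Optional

    @dataclass
    class Account:
        usual_countries: set            # countries this user has logged in from before
        last_login: Optional[datetime]

    def login_risk_flags(account: Account, ip: str, now: datetime,
                         country_of: Callable[[str], str],
                         failures_from_ip: Callable[[str], int]) -> list:
        """Return reasons this login looks risky; any flag => step up to an email challenge."""
        flags = []
        country = country_of(ip)  # placeholder for a real GeoIP lookup
        if account.usual_countries and country not in account.usual_countries:
            flags.append(f"login from unfamiliar country: {country}")
        if account.last_login is None or now - account.last_login > timedelta(days=180):
            flags.append("no login seen in the last six months")
        if failures_from_ip(ip) > 20:
            flags.append("source IP has an unusually high failed-login rate")
        return flags

The exact thresholds don't matter; the point is that these signals are cheap to compute and were available.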
> Why is a person from UK logging in from China IP?
And then we get to the other side of this where people get locked out of accounts because they went on vacation and bothered to check their email.
And often times these "person from UK logging in from a China IP" are massively wrong. For the longest time my home IP was showing up as from another country in most GeoIP databases. They're routinely trash.
The only good thing I have to say about lastpass is that they allowed you to allowlist countries you wanted a login from. Like calling your credit card company, I'd login and add a country if I was traveling.
That's an example of what I was talking about though. I set LastPass to only let me sign in from the US and suddenly I was locked out at home because it thought my home IP address was non-US despite definitely being in the US at the time.
Yeah, security is always a tradeoff between convenience and control. If I were in your shoes I would have allowlisted the IP: switched to my phone, or gone to a coffee shop, a friend's house, the library, the gym, McDonald's, Target...
Remember that when you are giving your DNA to a company, you are also partly doing it for all your family members. Maybe talk about it with them before doing it, especially if it is just for fun.
So I'm a user of 23andMe. I have DNA Relatives on (don't find it that useful, though), but I don't really see much "DNA" in terms of "my relatives". I've also never accepted anyone to see any of my profile; I just see who they say my relatives are and how close they are (first/second/third cousin and so forth).
I assume most people are the same.
Therefore, I'm not sure what significant information an attacker could have gotten on me. Anyone care to enlighten me?
This is from someone who tried to connect with me, and it says:
"By connecting you will be able to explore each other's personal and genetic information, which may reveal surprises. Learn how sharing works."
And if one clicks on the learn link, one sees:
"Establishing a sharing connection on 23andMe allows users to view one another's profile names and information (including profile sex), information from compatible reports, your predicted relationship, and the number and location of overlapping DNA segments. A sharing connection does not allow either person to search or download the other person's raw data, access their DNA Relatives list, or if applicable, view reports that require an additional consent, or view and download the other person’s Reports Archive."
I never established any sharing connections and assume the same for most.
It's all a balancing act between not wanting to unduly impact legitimate customers, while blocking as much fraud as possible.
Blocking the credential stuffing attacks? They probably did have mitigation efforts, but you can only be so aggressive before the false positives start blocking significant numbers of legitimate customers, who have no recourse except to wait out a temporary ban. And some credential stuffing attacks are extremely sophisticated, such that even best in class security companies can't always effectively block them.
Mandatory MFA? Great on paper, except that 10% of people hate the extra steps (probably with great overlap between the people reusing passwords) and will complain and/or disable it if given the chance. Another 20% have invalid or out of date contact details (an old employer's email address, a landline phone number that can't receive SMS, etc.), and they'll be locked out of their accounts.
Yeah, there are ways to mitigate these downsides. And I'm not arguing that 23andme found the appropriate balance between "customer satisfaction" and "customer security." But I can see how a mostly reasonable organization could end up in this position. And it's mainly the risk of terrible press and upset customers that allows other companies to justify more security-oriented policies, so let them have it.
Every other healthcare website I use requires me to enter a code texted or emailed to me the first time I log in from a new computer. If someone used a corporate email address for personal services, they're likely used to being locked out of things.
23andme is also unique in their ability to create security questions to authenticate users who get locked out. "What is your date of birth and can you form a sideways U shape with your tongue?"
They prioritized earning new business and lowering customer friction over enforcing MFA. They also had no idea 14k accounts were brute forced and cred stuffed, so that's 100% on them. They have enough money to do the security work.
It's not like they don't already provide this to police and governments without a warrant and sell the data. If you expected your most personal data to be secure with them, you haven't been paying attention.
It's not clear to me, due to the particular nature of the data in this breach and why it was available to the 14 000 compromised accounts, that it getting breached will actually cause any damages.
This is data that appears in the ancestry data for the 14 000 compromised accounts. Your data only appears in another account's ancestry data if you opted in to sharing ancestry data and they are a relative of yours. I think most people opt in, because (1) finding out about your ancestry and relatives is one of the main reasons people use services like 23andMe, and (2) even people who started using it just for the health data often get curious and start using the ancestry stuff too.
23andMe counts anyone who is a 4th cousin or closer as a relative, which results in some big relative lists. Mine has 1500 other 23andMe users in it, but that might be above average. Based on 23andMe having 14 million customers, 14 000 accounts being compromised, and 6.9 million accounts having data taken via the relatives lists of those 14 000 compromised accounts, and assuming that everything I don't have any data on is pretty evenly distributed (the statistical equivalent of a spherical cow), I'd guess that the average is around 700.
If that's even in the right ballpark then when you opt in to sharing this data you are opting in to share it with several hundred people, mostly complete strangers to you, mostly scattered all over the US and a few foreign countries.
At that point I'm not sure if the difference between just sharing it with them and sharing it with the world is meaningful.
If it was truly a credential stuffing attack, then there's a shared responsibility between users and 23andme: 23andme is responsible for not enforcing 2FA, the users are responsible for reusing passwords.
to me, the takeaway is that we need to roll out passkeys as quickly as possible.
The service requires authentication, but then does not take the obvious steps (industry mediocre(1) practices) to ensure widely known problems with authentication are mitigated - let's not blame regular people for the failure of this service to secure their accounts.
(1) the bar to do better is quite a bit below "best practices".
11k people due to credential stuffing is not a dent in the 6+ million though. It's a disingenuous argument.
Not only that, but users should have been far better protected against even their own poor password management, given the type of sensitive information 23andMe is handling.
> users negligently recycled and failed to update their passwords following these past security incidents, which are unrelated to 23andMe. Therefore, the incident was not a result of 23andMe’s alleged failure to maintain reasonable security measures.
Why? You could have a 100% secure website, but if the user gives their credentials to someone else (another website in this case), and that website with bad security gets hacked and leaks the credentials, how is that their fault?
Because we've known about credential reuse for 20+ years, developed multiple means to keep a site secure when it happens and then chose to not employ those security measures on data people broadly consider incredibly sensitive.
It is your job as a service provider to not allow access to anyone but the authorized user; how you do it is an implementation detail. You can't throw up your hands and say "well, we decided that doing that is too hard, so we're defining the authorized user as anyone who knows the password."
How could they not know this? This is common knowledge for anyone involved in online consumer security. And the correctness of this common knowledge is beyond dispute, with numerous publicly-known breaches traceable to this practice, as well as the development of scaled, repeatable attack methods associated with it.
The only way they could have not known is if they failed to employ or consult with the appropriate professional expertise in this area.
> This password wasn't found in any of the Pwned Passwords loaded into Have I Been Pwned. That doesn't necessarily mean it's a good password, merely that it's not indexed on this site.
But I doubt it's a common practice to do this kind of check.
Yeah it's sure not common yet but in a 2021 post Troy Hunt says it was already thousands:
> literally thousands of other services doing everything from providing their own password checker through to checking their customers' passwords on every registration, login or password change to see if it's previously been breached
Right, but thousands is a minuscule number - there are myriads of services out there. While it would've been nice if 23andMe would've used a service like this, can't exactly blame them if they didn't - it's not like this is some well-established industry standard, but more like an extra effort by extra security-conscious companies.
What I'd blame 23andMe for, is having encouraged so much all that sharing of sensitive data with other accounts, without providing additional safeguards to access such data.
I don't think it's a good idea to depend on yet another third party domain in this way.
The check has low value.
HaveIBeenPwned only provides value when it positively informs about a pwned password, otherwise it says nothing useful.
If you're already rejecting weak passwords using some heuristics, then the remaining passwords are unlikely to show up on that site, because strong passwords are unlikely to be pwned, even if reused and subject to a breach.
also there is no way to know if the usernames from the leaks are the same person as your own users. otherwise, you’re essentially saying that you have to pick a unique password that nobody else in the world is using.
If everyone in the world were using 100 passwords, you'd need space for 800 billion passwords, which is about 39.5 bits of entropy. Assuming an alphabet of 74 characters (lower + upper + digits + a few symbols), a little over 6 random characters is already enough to make 800 billion unique passwords. A truly random 9-character password lives in a keyspace of 74^9 ≈ 6.6×10^16, so the chance it overlaps any of the 800 billion passwords in use is roughly 8×10^11 / 6.6×10^16 ≈ 0.001% -- and you'd still multiply that by the probability of the colliding password having been leaked to get the chance it's on a blacklist. Make it 10+ characters and you're fine.
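A quick sanity check of that arithmetic, using the same assumptions (8 billion people with 100 passwords each, a 74-character alphabet):

    import math

    in_use = 8e9 * 100                    # ~800 billion passwords in use worldwide
    print(math.log2(in_use))              # ~39.5 bits just to give each one a unique slot

    alphabet = 74
    print(math.log(in_use, alphabet))     # ~6.4 characters cover that many combinations

    keyspace_9 = alphabet ** 9            # ~6.6e16 possible 9-character passwords
    print(in_use / keyspace_9)            # ~1.2e-5, i.e. ~0.001% chance of colliding with any in-use password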
There is no way to know or verify what is or isn't a real leak, and what passwords are or aren't leaked. We can't settle this in the comments, and I think it's just security theater.
1. User x signs up for Service A using pwd abcd1234
2. User y signs up for Service B using pwd abcd1234
3. User z signs up for Service C using pwd abcd1234
Service A is breached. Because Services B and C also monitor leaked credentials, users x, y, and z all get emails forcing a reset of their password. They reset their passwords to abcd1245 and all is good (among many other permutations). Now Service B is breached; rinse and repeat.
Depending on the workflow (the more automated it is, the worse off we are), there are attack vectors to exploit here by doing a carefully planned fake 'leak' using mostly useless emails but with a few genuine emails mixed in to trigger password resets, etc, etc.
I'm not sure what you're talking about, you definitely seem to have misconceptions about passwords, their math and services like haveibeenpwned.
You seem to like simple passwords and think that people should use them so long as they're not leaked.
Well no, simple passwords are never good; probably a very large fraction of the possible ones has already been the subject of some leak, and they're very risky in any case.
Service providers (should) strive to give some protections even to users with bad passwords, but you sure can't rely on that.
If the generator is really random, getting back such a simple password is definitely possible, but as unlikely as winning a lottery
(ok maybe not one of the hardest ones)
> Can you decribe a mechanism to verify that users in an alleged leak are the same users as your own?
What I offered was the mechanism I would use to solve this problem, which is a probabilistic approach rather than a deterministic approach. What you described is likely infeasible and impossible, so rather than get blocked on that, I moved to an alternative angle.
There is no need for a mechanism in this case; building such a thing would be a waste of time.
Honestly, I was trying to respond to your question, and perhaps I was too terse. But I don’t think the mechanism you ask for is possible (or desirable) today. In a future scenario with some type of universal identity there would probably be no need for such a mechanism.
On the one hand, I sympathize with anyone whose data is stolen in a data breach, we've all been there (and some of us have three dollars from Equifax lining our pockets to prove it).
On the other hand, I remember thinking ten years ago or whatever: "23AndMe sounds cool, I'd love to know about my ancestry and genetic risk factors, but that's a crazy amount of intensely personal data to trust a corporation with, so I guess I won't do that." And I'm as dumb as a rock, so if I made that decision with the same information as everyone else, it must have been pretty obvious what the consequences could be.
If you mean just DNA analysis I suppose that's possible, but unlikely. Genealogy involves your links to specific other people and thus is impossible without storing data.
In order to use DNA tests in genealogy, you need to know every segment on every chromosome that matches. Matching is not a go/nogo proposition. There are degrees of matching that depend on the biological relationship. Ex: Parent/children average 3,700 cM (centiMorgans), siblings are 2,600 cM, first cousins 900 cM etc.
Surely that's not the whole point. Genealogy is not the only use case for genetic testing. Health information is another pretty major niche. Curiosity is the third contender. And there's probably more.
6.9 million accounts had information stolen because they were "relatives" of 14,000 users? Something doesn't add up there. That would mean each of those random users had 492 "relatives" on the platform. I've never used 23andMe for fears of exactly this, but they should look at recalibrating what the term "relative" means if you're opting in to sharing genetic information. The average Facebook user has 338 friends, as a point of reference, and I sure wouldn't want my information shared with those people.
They go out to 4th cousins. I've got 1500 people that 23andMe says are my relatives.
> 6.9 million accounts had information stolen because they were "relatives" of 14,000 users? Something doesn't add up there.
It adds up. The key is that for the attackers to get my data they only have to compromise 1 of my 1500 relatives.
14 000 out of 14 000 000 accounts were compromised, so 1 in 1000.
In other words the attacker has 1500 chances to roll a 1 on a d1000 if they want to get my data. The probability they can do that is 1-(1-0.001)^1500, which is about 0.78.
If everyone had about as many relatives as I do, we'd expect the attackers to get data on nearly 11 million people from those 14 000 compromised accounts. Getting "only" 6.9 million suggests that on average people have a little under 700 relatives.
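The back-of-envelope model, for anyone who wants to poke at it (figures are the ones quoted above: 14M users, 14 000 compromised accounts, 6.9M affected, a 1500-person relatives list):

    import math

    users, compromised, affected = 14_000_000, 14_000, 6_900_000
    p = compromised / users                           # 0.001: chance any given relative was compromised

    print(1 - (1 - p) ** 1500)                        # ~0.78: odds at least one of 1500 relatives was hit
    print(users * (1 - (1 - p) ** 1500))              # ~10.9M exposed if everyone had 1500 relatives

    coverage = affected / users                       # ~0.49 of users actually had data taken
    print(math.log(1 - coverage) / math.log(1 - p))   # ~680: average relatives-list size implied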
23andMe, Ancestry.com, and similar sites have extreme appeal to some of the oldest and least technical users. It's grandmas and grandpas who are willing to pay $20/month for a DNA site. The Venn diagram of DNA site users and online/telephone scam victims has to have a big overlap.
This is a tough population to increase the security for. They are highly vulnerable to social engineering, reuse passwords, use weak passwords, and struggle mightily with 2FA or other methods. But that's the gig, it's on 23andMe to solve it.
You always know you’re in a strong position when you have to resort to “stop hitting yourself” or “don’t make me hurt you, I don’t like it when you make me hurt you”.
> “Therefore, the incident was not a result of 23andMe’s alleged failure to maintain reasonable security measures”
In all honesty, you can hardly make this claim unless they properly communicated and mandated (at least in writing, since I can't imagine how it could actually be enforced) that users pick passwords different from those on other platforms. Or at the least enforce an aggressive password change schedule, etc...
>at least in writing, since I can't imagine how it could be actually enforced
You can check passwords against known-compromised lists and then tell the user "sorry, please use a different password". This is something that is a recommended best practice, and has been for at least a few years.
>Or at the least enforce an aggressive password change schedule
This has been explicitly not recommended since at least 2016 by NIST. Research has shown this leads to password fatigue, which results in weaker passwords that are just iterated on (password1 -> password2 -> password3).
Let's not fall for 23andMe's attempts at victim blaming. They offered the service, and they failed to implement reasonable security practices. Their process allowed users to pick "obviously" flawed passwords. Well, those passwords weren't obviously flawed enough to bar their use, but obvious enough to blame users afterwards... yeah, that's BS.
23andMe is such a bizarre service to me. I have absolutely no desire to find out the details of some long-lost cousin I don't know, or learn there is a chance I'm 2% Viking, or that I have a chance of developing some weird kind of cancer because some postdoc working in a paper mill published a paper.
I would have never guessed people would be interested in such useless information.
As the family genealogist, I completely agree. 23andMe had such minimal utility in comparison to the first service where I tested. At most, it corroborated and confirmed what I already knew and had been told. Also I identified the strongest possible genetic match, a person who hadn't tested elsewhere.
But 23andMe's cutesy haplogroup classifications, their faux historical narratives, the ridiculous litany of "health conditions" they warn me about, it's bunk, 100% bunk. Plus, no medical professional will accept this data as diagnostic, so why bother?
Is there anywhere to get a DNA test anonymously? I assume maybe you could put false information into 23andMe, but I assume they'd still have your name from your credit card.
So from now on, does everyone here check each login from any user against known vulnerable password lists or known leaks? New compromises might appear after the password has been set or changed...
More likely this will prove to be another example of individuals under-appreciating their personal data and privacy, and over-appreciating some kind of novel technology.
That's a great title. I love how they've encapsulated judgment, jurisprudence and execution in a single sentence! Mastery of "journalistic" hangfolken. hahahah :)
> The firm claimed complainants had “negligently recycled” login credentials from other exposed accounts and that poor cyber hygiene practices were to blame for their exposure during the breach.
I mean. If people reused passwords for 23andMe, is this really 23andMe's fault? Should they have required 2FA for everything? That's kind of a hard sell tbh.
So, to play devil's advocate: 23andMe shouldn't have allowed credential stuffing to happen in the first place, by rate limiting logins after so many failures, or by locking accounts after repeated failed logins.
They could've blocked the source IPs making all the login requests, but those were probably being rotated to avoid setting off alarms. There isn't enough information in the article to go on, but since they suffered so many breaches of user accounts, they probably did do something wrong. I'm too busy to dig into the specifics.
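The lockout idea itself is only a few lines; here's a sketch with made-up window and threshold values (a real deployment would keep counters in shared storage like Redis, and would also track totals per IP across all accounts, since stuffing often needs just one attempt per account):

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 15 * 60   # look at the last 15 minutes (illustrative)
    MAX_FAILURES = 5           # lock out after 5 failures in the window (illustrative)

    _failures = defaultdict(deque)   # (ip, account) -> timestamps of recent failures

    def record_failure(ip: str, account: str) -> None:
        _failures[(ip, account)].append(time.time())

    def is_locked_out(ip: str, account: str) -> bool:
        q = _failures[(ip, account)]
        cutoff = time.time() - WINDOW_SECONDS
        while q and q[0] < cutoff:   # discard failures outside the window
            q.popleft()
        return len(q) >= MAX_FAILURES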
Continuing devil's advocate, we don't know that they didn't do those things. Maybe they did and that's why it's only 14,000.
If an attacker works off lists of compromised usernames and passwords from various sites, for a given user there's always a chance that the first one they try is correct. Especially for the kind of user who recycles the exact same password on 50 different sites. (Incidentally, you should probably try to hack that kind of user first, and for each user, try their most-recycled passwords first.)
I don't think this is a smoking gun. Assuming competence, they could have obtained a password dump from another site that stored cleartext passwords, then compared their own database rows with hash(salt(leaked_password)).
It's easy, and could be exactly how other companies with great security practices (like Google) can tell you when they find your password in a password dump.
Imagine you are a company with great password practices, how would you tell that a user re-used a password that was exposed in some other data breach without you being able to generally know what a user's password is? Well, of course, it's the same way that you verify their password when they login. You track (or more likely in this case, adhoc check) data breaches, when you find a matched email in the breach with one of your users, you check if the password from the breach would allow that user to login.
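In code, that ad-hoc cross-check might look roughly like this (bcrypt is just an example verifier; the real check is whatever the login flow already uses, and get_password_hash is a hypothetical lookup helper):

    import bcrypt

    def users_reusing_leaked_passwords(leak, get_password_hash):
        """leak: iterable of (email, cleartext_password) pairs from another site's breach.
        get_password_hash: email -> stored bcrypt hash (bytes) or None."""
        hits = []
        for email, leaked_pw in leak:
            stored = get_password_hash(email)
            if stored and bcrypt.checkpw(leaked_pw.encode("utf-8"), stored):
                hits.append(email)   # this user reused the leaked password: force a reset
        return hits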
I think one fair criticism is they could have had intrusion detection trigger when, presumably, the same IP address was logging in to thousands of accounts. But who knows how sophisticated the attack was?
[edit]: there are other obvious heuristics that could have detected it, it does show they had either very basic or no intrusion detection, which, for a service of this nature, isn't really acceptable
If an average online gaming service or social network can manage sending 2FA codes over email or SMS when their users are logging in from a new device, the holder of a large chunk of the world's genetic data can as well.
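And the mechanics really are small. A minimal sketch of an emailed one-time-code challenge for unrecognized devices, where send_email is a stand-in for whatever mail provider you use and the in-memory store and 10-minute expiry are purely illustrative:

    import secrets
    import time

    _pending = {}   # user_id -> (code, expires_at)

    def start_challenge(user_id: str, email: str, send_email) -> None:
        code = f"{secrets.randbelow(1_000_000):06d}"        # 6-digit one-time code
        _pending[user_id] = (code, time.time() + 10 * 60)   # valid for 10 minutes
        send_email(email, f"Your verification code is {code}")

    def verify_challenge(user_id: str, submitted: str) -> bool:
        code, expires_at = _pending.pop(user_id, (None, 0.0))
        return code is not None and time.time() < expires_at and secrets.compare_digest(code, submitted)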
> If people reused passwords for 23andMe, is this really 23andMe's fault?
Yes absolutely. If it were one account being compromised due to the user reusing a password, that would be the user's fault. But what happened is credential stuffing, and that is absolutely something a professional IT organization should be prepared to defend against.
I agree with this. Frankly, if something is to blame, it's user agents / browsers that should be much better at educating and alerting about password reuse.
The testing kits may be the "best" - their data retention and security just sucks. I (stupidly) did 23andMe a while back, I've since requested my data to be deleted. Who knows if that's even honored.
I never trust that someone has deleted data. There is NO way of knowing that they did. Did they delete from the active database? What about clones of the data to DEV? What about back ups? I seriously doubt any company restores a back up to purge data only to make a new back up of that. If they did, do they purge the original back up and only use the new?
I don't care if something like GDPR states they must. I do not trust corps to actually go through the hassle/expense of it.
Completely agree, and I am continuously frustrated that the US has such poor personal privacy standards and regulations, regardless of how effective any existing regulations are.
> After disclosing the breach, 23andMe reset all customer passwords, and then required all customers to use multi-factor authentication, which was only optional before the breach.
Often in discussions about 2FA and IP address checks on HN, there is a sizable contingent that is frustrated with how ever more security impacts how they want to use a product, such as wanting to not own a smartphone while still doing online banking or using their credit cards overseas.
Add in all the people who struggle to use 2FA of any kind. At my first employer, I was there when they implemented it and it basically destroyed an entire week of productivity as so many people struggled to grasp how to set up a token in the authenticator app and use the token. I would be curious to know what the stats are on how 2FA impacts use and churn of users.
I can definitely understand this argument - imagining my dad setting up even SMS-based 2FA makes me shudder. However, for information this sensitive, it would have been smarter (imo) to strongly encourage 2FA, along with tutorials on how to set it up (articles, videos), and finally to add an option to not use it with a BIG SCARY WARNING and a consent checkbox.
Ultimately, companies like this are making the choice of information safety vs profits - it’s a tale as old as the free market.
If you read the article, it discusses how the initial breach was due to users using the same password for 23andMe as on other sites. It's really very difficult to guard against that, no?
Just a reminder that despite whatever their front page advertises, while your account will be marked as "deleted" (23andMe still retaining your email address and some other pieces of information), your genetic data won't be deleted:
> 23andMe and the contracted genotyping laboratory will retain your Genetic Information, Date of Birth, and sex as required for compliance with legal obligations, pursuant to the federal Clinical Laboratory Improvement Amendments of 1988 and California laboratory regulations.
> 23andMe will retain limited information related to your data deletion request, such as your email address and Account Deletion Request Identifier, as necessary to fulfill your request and for the establishment, exercise or defense of legal claims.
And that's why I don't use 23andMe, even though I'm quite interested in the product and was super tempted to buy. Just because it's not future-proof, and that's a deal breaker.
If you know of a DNA sequencing service that can do its job, send the result, then destroy every sample and every bit of information they had (save, possibly, for the payment receipt), please let me know. Don't care about ancestry, relatives and other social stuff, just the raw genetic data.
As someone who is excited about the potential of using DNA tech for new drug discovery, I’m really sad to see 23andme handle this so poorly. I’m seriously considering not doing their periodic health surveys given this incident + them putting updated health reports behind a paywall.
Luckily I’m also in the NIH All of Us program and they’re at least better about data safety (for now?).
This is like a home security provider blaming the victims of a break-in for not changing their locks every 6 months. Just take the L and own up to it, 23andme.
That’s not at all what it’s like. This is like using the same PIN for your work lock and home lock and then blaming the lock maker when thieves from work got into your house because they found your address in the stuff they stole from your work.
It's like getting mad at the home security provider because you made a hundred copies of your house key, sprinkled them all over town with your address on a tag, and someone managed to get into your house.
It's amusing how Google has shut down more services than Facebook, Microsoft, and Apple combined. Despite this, the controversial 23andme is somehow still operational. It seems like Sundar Pichai is doing everything in his power to dismantle Google.