They have a large set of different emails + passwords, and a large set of IPs.
Each IP checks a single set of credentials, so no single IP racks up too many login attempts in a short timeframe, and no single account ever looks like it's being brute forced. If the attacker rented time on the botnet for a long enough period, they could fly under the radar for quite a while. 23andme sees lots of failed logins, but has no real way to pin it down.
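To make that concrete, here's a toy sketch (made-up threshold and IP ranges, not 23andme's actual detection logic) of why a naive per-IP rate limiter never fires against this pattern, while the aggregate failed-login rate clearly spikes:

```python
from collections import Counter

# Hypothetical per-IP threshold: flag any IP with more than 5 attempts.
PER_IP_THRESHOLD = 5

# Simulate 10k stolen credential pairs, each tried from a unique botnet IP.
attempts = [(f"10.0.{i // 256}.{i % 256}", f"user{i}@example.com")
            for i in range(10_000)]

per_ip = Counter(ip for ip, _ in attempts)
flagged = [ip for ip, n in per_ip.items() if n > PER_IP_THRESHOLD]
print(len(flagged))   # 0 -- no single IP ever crosses the threshold

# But the global signal is unmistakable:
print(len(attempts))  # 10000 login attempts in the same window
```

The only per-request signal left is each IP making exactly one attempt, which is indistinguishable from a legitimate user; you'd have to alert on the site-wide failure rate instead.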
reCAPTCHA would be the answer here. What's interesting/concerning is that Google's reCAPTCHA (assuming 23andme was using it, and they should've been) appears to have been defeated.
A captcha still lets the attacker run the credential stuffing attack, just potentially more slowly, which still doesn't protect the user.
I think for sensitive data where you want to protect the user, it makes even more sense to just generate passwords for them. It’s even simpler than 2FA. Some online casinos do this.
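A minimal sketch of what "generate passwords for them" could look like, using Python's standard `secrets` module (the length and alphabet here are my assumptions, not anything a particular site uses):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Server-assigned random password: the user can't pick a weak or
    already-leaked one, so stolen credential lists are useless against it."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. a 16-char string like 'kQ3xT9mZ...'
```

With 62 possible characters and 16 positions, that's roughly 95 bits of entropy per password, and since the value never comes from the user, it can't appear in any breach corpus.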
If your attacker is stuck manually passing the captcha time after time, they're probably not going to bother.
The thing that worries me more is the possibility that newer AI tools are allowing attackers to beat reCAPTCHA with automation. If that's the case, a lot of folks are going to be caught with their pants down.
The linked post isn’t reCAPTCHA, it’s just some random bad CAPTCHA that’s been easy to defeat with OCR for ages. The real fundamental flaw is that human time is cheap enough: see Amazon Mechanical Turk. Many bulk, human-powered CAPTCHA-solving services have existed for years.