It's almost like, if they already know you're not a bot, they don't have to try very hard to re-prove it, or something.
Think of it in a Bayesian sense.
If 10% of anonymous users end up being bots (the prior), and the "hard" recaptcha has a 1% false-negative rate (the chance a bot is incorrectly passed as human), then of the anonymous users who succeed in getting past the recaptcha, about 0.1% will be bots (the posterior).
But if only 1% of signed-in users are bots (probably fewer than that), you only need a recaptcha with a 10% false-negative rate to achieve the same bot throughput limit. And those users are less frustrated.
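The arithmetic above can be sketched with a simple Bayes calculation. This is a hypothetical illustration, assuming real humans pass the captcha at a rate of roughly 100%:

```python
def bot_fraction_after_captcha(prior_bot_rate, false_negative_rate,
                               human_pass_rate=1.0):
    """Fraction of users who pass the captcha that are actually bots."""
    bots_passing = prior_bot_rate * false_negative_rate
    humans_passing = (1 - prior_bot_rate) * human_pass_rate
    return bots_passing / (bots_passing + humans_passing)

# Anonymous users: 10% prior, hard captcha with a 1% false-negative rate
print(bot_fraction_after_captcha(0.10, 0.01))  # ~0.0011, i.e. about 0.1%

# Signed-in users: 1% prior, easier captcha with a 10% false-negative rate
print(bot_fraction_after_captcha(0.01, 0.10))  # ~0.0010, about the same
```

In both cases the absolute number of bots getting through per unit of traffic (prior × false-negative rate) is identical at 0.1%, which is the "same bot throughput limit" point.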
While Google worries about the false-negative rate, we as users measure frustration by the false-positive rate (real humans incorrectly flagged as bots). Ideally they would find a system where the two rates are independent, or where false positives are rare.
I understand how it is justified technically, but that does not invalidate the fact that reCaptcha is discriminating against the users who care about privacy.