
I feel like a good approximation would be the ability to just turn the Bayesian priors on or off (or whatever their equivalent is of "push this widely popular thing even when it isn't supported by this person's specific interests, as learned from their usage data").

My anecdote: I recently made a Twitter account specifically to follow some artists and share some art. There is nothing in what I wrote on Twitter, in who I follow, in the tweets I liked, or even in my IP-based geolocation (I'm not from the US) that would tell the Algorithm I'm in any way interested in American politics. On the contrary, for a long time I kept clicking "not interested in this" on anything related to politics. And yet the algorithmic timeline on that account is regularly swamped by US political drama that I'm not interested in, that I can't do anything about, and that generally just makes me miserable.

My only explanation is that these posts get pushed to me because they are overwhelmingly popular on Twitter overall, and that high prior swamps out the negative evidence specific to my account.
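To make the hypothesis concrete, here's a toy sketch (entirely my own assumption, not Twitter's actual ranking code) of a ranker that adds a global-popularity prior to per-user evidence in log-odds. The point is that a strong enough prior can outweigh several explicit "not interested" signals:

```python
import math

def rank_score(global_engagement_rate, user_signals):
    """Hypothetical score: log-odds popularity prior + per-user evidence."""
    prior = math.log(global_engagement_rate / (1 - global_engagement_rate))
    return prior + sum(user_signals)

# A viral politics post: 30% global engagement; the user clicked
# "not interested" three times (assume each is worth -0.3 in log-odds).
politics = rank_score(0.30, [-0.3, -0.3, -0.3])

# A niche art post: 2% global engagement; mildly positive user signals.
art = rank_score(0.02, [0.5, 0.5])

# The politics post still outranks the art post despite the dislikes.
print(politics > art)  # True
```

With these made-up numbers the politics post scores about -1.75 against the art post's -2.89, so the unwanted content wins the ranking even though every user-specific signal points the other way.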

Of course, the other, more sinister explanation is that it's a deliberate (human) editorial decision at Twitter to actively push this kind of content to people, but I find the technical explanation easier to believe.


