
They used it to predict the political affiliation of people who don't explicitly state a party preference.

The two parties already have lists of registered party members (and they can see who on Facebook explicitly states their party preference); for those members, the main goal is higher turnout (they are the training data). The other voters they're interested in are unregistered (e.g. independent) voters who are likely to be on their side ideologically.

The core idea is very simple: they believe that if someone says they're independent, but their preferences/features (age, gender, location, likes, posts) predict a moderate or high likelihood of $PARTY affiliation, then showing this person political ads may move them from the 'maybe vote for $PARTY' category into the 'definitely vote for $PARTY' category.
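To make the idea concrete, here is a minimal sketch of that scoring step, assuming a logistic-regression-style model with entirely made-up features and weights (a real model would be fit on the registered members mentioned above):

```python
import numpy as np

# Hypothetical feature vector for one "independent" voter:
# [normalized age, urban flag, liked page A, liked page B].
# The weights and bias are invented for illustration; in practice
# they would be learned from labeled (registered-member) data.
WEIGHTS = np.array([0.8, -1.2, 2.0, 1.5])
BIAS = -0.5

def party_score(features: np.ndarray) -> float:
    """Return a modeled probability of $PARTY affiliation (logistic link)."""
    z = WEIGHTS @ features + BIAS
    return 1.0 / (1.0 + np.exp(-z))

voter = np.array([0.6, 1.0, 1.0, 0.0])
print(round(party_score(voter), 3))
```

Voters whose score lands in a middle band would be the ad targets: high enough to be plausibly sympathetic, low enough that persuasion might matter.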

If you have continuous access to new Facebook data as you're serving ads, you can verify your ads are working on an individual basis by checking the 'score' for $PARTY affiliation predicted by your model before and after an ad (I want to stress that this can be done on an _individual basis_). The likely sequence of events is that they did A/B testing on different kinds of ads and found that fake inflammatory ads were most effective at achieving this goal in a very measurable way ($PARTY score); the resulting media/political atmosphere is collateral damage (hopefully unintended).
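The before/after comparison described above can be sketched as follows, with synthetic scores standing in for model output (the variant names, effect sizes, and sample sizes are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # users per ad variant

# Hypothetical per-user $PARTY scores before exposure (synthetic data).
before = {"calm_ad": rng.uniform(0.4, 0.6, N),
          "inflammatory_ad": rng.uniform(0.4, 0.6, N)}
# Scores re-predicted after exposure; the assumed shift is larger
# for the inflammatory variant.
after = {"calm_ad": before["calm_ad"] + rng.normal(0.01, 0.02, N),
         "inflammatory_ad": before["inflammatory_ad"] + rng.normal(0.08, 0.02, N)}

def mean_lift(variant: str) -> float:
    # Per-individual delta, then averaged: each user is their own control.
    return float(np.mean(after[variant] - before[variant]))

for variant in before:
    print(variant, round(mean_lift(variant), 3))
```

The point of the per-individual delta is that you don't need a holdout group in the classic sense; the model's score on the same person before and after exposure serves as the measurement.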

Source: I am a data scientist / machine learning scientist; this is how I would do it, and how it appears others have done it. I don't work on political data, but I have worked on personalized recommendations, which are similar.



Did the app really grant them continuous access to user info? I thought it was a one-off thing - they get your data at the time of use (and your friends') and that's it.

Plus, the approach you outlined would require the user to like/dislike things based on the ad they saw, so CA could observe a change in the predicted affiliation (they didn't have access to posts, as far as I know). I don't think it would have that effect (even if the ad influences you, I doubt it would make you go unlike Obama's page, for example). Not to mention that in all likelihood you shouldn't be able to verify that a particular ad was shown to a given individual.

I suspect it was a simpler use case - they would group users into segments, and then craft different ad strategies for each one (maybe based on other research or just expert opinion).
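The segmentation step you describe can be sketched with a standard clustering pass over user features; the two-feature, two-cluster setup below is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic user features: two loose demographic groups
# (e.g. young/urban vs. older/rural), 50 users each.
group_a = rng.normal([0.2, 0.8], 0.05, size=(50, 2))
group_b = rng.normal([0.7, 0.3], 0.05, size=(50, 2))
users = np.vstack([group_a, group_b])

# Segment users; each segment would then get its own ad strategy.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)
print(np.bincount(segments))
```

With segments in hand, the per-segment creative could come from focus groups or expert judgment rather than from any individual-level feedback loop.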


> I suspect it was a simpler use case - they would group users into segments, and then craft different ad strategies for each one (maybe based on other research or just expert opinion).

It is in this last process that it becomes individual-based: the A/B tests are done per individual, as a function of the specific strategy applied to each person.




