Facial recognition can predict person’s political orientation with 72% accuracy (nature.com)
341 points by andreykocevski on March 5, 2021 | 383 comments


There's an avalanche of people commenting on this who didn't bother to check the article before raising their methodological objections, so let's get these out of the way here.

- Yes, they controlled for objects appearing in the pictures that might indicate political affiliation. The images are tightly cropped around the face. See Methods.

- Yes, this is significantly better than both a coin flip and a human classifier. They gave the same test to humans, who did much worse than the model. See Abstract, Introduction, and Results.

- Yes, this is doing more than just detecting a person's race, age, and/or gender. The classifier is still accurate when they compare people with the same race, age, and gender. See Results.

If you want to discuss actual limitations in the study, here are some the author points out:

- "A more detailed picture could be obtained by exploring the links between political orientation and facial features extracted from images taken in a standardized setting while controlling for facial hair, grooming, facial expression, and head orientation."

- "Another factor affecting classification accuracy is the quality of the political orientation estimates. While the dichotomous representation used here (i.e., conservative vs. liberal) is widely used in the literature, it offers only a crude estimate of the complex interpersonal differences in ideology. Moreover, self-reported political labels suffer from the reference group effect: respondents’ tendency to assess their traits in the context of the salient comparison group."


Good points and summary. Adding on to that, the following is from the abstract, about two sentences in. Seems like some users do enjoy jumping straight to the comments!

* Political orientation was correctly classified in 72% of liberal–conservative face pairs, remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%).

* Accuracy was similar across countries (the U.S., Canada, and the UK), environments (Facebook and dating websites), and when comparing faces across samples.

* Accuracy remained high (69%) even when controlling for age, gender, and ethnicity.


"controlling for ... ethnicity"

There's significant signal buried in here that is likely not controlled for, depending on how granular their controls are.

White-German and White-Italian are much more (10-13 percent) likely to be conservative leaning than White-Irish or White-British.

Hispanic-Cuban are more likely to be conservative leaning than Hispanic-Mexican.


Their section about demographic controls in the paper was really unclear (kinda surprised you can get this published while being so vague).

What does a final number with the controls active even mean? I doubt accuracy was identical within each grouping they used. Even the groupings are really unclear in the paper (at most 4 ethnic groups, no idea how they handled age, etc.)


Now, what would be interesting is to build a model that accounts for this "knowledge" and see if it can beat the out-of-the-box classifier :-) I'd assume it can, and the question is: how far can a bit of manual modeling bring us?


Maybe they asked the wrong questions, but I think that's the point of the questionnaire model.


Sure, but my thought was that building an ML model that considers the questionnaire results (or existing similar findings) would be a very interesting next step.


In the US or generally?


The words conservative and liberal have wildly different meanings outside of the US. Even in the US they are not well agreed upon (especially the definitions; people do seem to know which one to pick when they have to).


[flagged]


Please do not do ideological flamewar on HN. It's not what this site is for.

https://news.ycombinator.com/newsguidelines.html


> Responsibility: your job to feed and cloth yourself

Unless you are a farmer


Every single one of those "principles" listed is rhetorical, not factually based on policy.

I don't know how the GOP still manages to run on a platform of "balanced budgets" when they have consistently run higher deficits than the Democrats for nearly the past 100 years.


[flagged]


Please do not do ideological flamewar on HN. It's not what this site is for.

https://news.ycombinator.com/newsguidelines.html


I admit that I am impressed that the comment id of your reply to the parent comment is smaller than the comment id of this comment. I didn't think you had it in you.


It took me a while to figure out what you meant by this, but I think(?) I get it now—in which case, I'm glad you noticed the attempt to be even-handed. When I post twin scoldings like that, though, what's significant is the pox on both flamewar houses. You shouldn't attach any significance to which of the two got posted first because that's entirely random.

I don't especially try to find opposing belligerents to moderate when a fight breaks out; that would just be a different form of bias. But it's handy when it comes up naturally, because it neutralizes the "you're only doing this because you disagree with me" objection which usually comes up, and which I'm always bracing myself for, otherwise.

I think this reaction must be biologically rooted somehow, because it's so common, and recognizably the same "no fair!" reaction that one can observe in small children and sometimes in animals. That's not a criticism, and certainly not a personal criticism—my point is that we all have this.


In the US


I suspect we're going to see the same thing we saw in the model that can detect sexual orientation with much better than chance odds. It was, apparently, detecting that gay men tend to take or select images of themselves for a dating profile using a different angle, better lighting, and possibly differences in things like hairstyle and beards.

Very anecdotally, I've noticed a possible weak correlation between certain kinds of beard styles and political leanings in men.


Obviously it's looking at something. I feel like everyone is jumping to assuming the authors are implying it's somehow in the person's facial structure, but it doesn't say that.

If it's looking at the quality of the photo, or the trim of the beard, that's still interesting. Among other things, it means that analyses like this might start cropping up everywhere you submit your photo (a job application cover letter?), and also that humans might be doing this innately with photos without even realizing it.


Humans might try, but the same study says they're not good at it, with only 55 percent accuracy.


The 55% accuracy is from a citation of a different study with a different set of photos and a different question.

https://www.researchgate.net/publication/232255935_Accuracy_...


Oh, that's really important. It's the only important comparison - I don't believe 72% means anything on its own. The study either needs to try the exact same methodology on humans, or to find a metastudy that consistently shows humans perform well below that on these tests.

To be fair to the authors, the information isn't hidden. It's prominent in the introduction, just not in the abstract.


Why does this need to be better than humans to be interesting?

It's still extremely interesting to me that a program can determine with over 70% accuracy a person's political orientation from a cropped photo, without taking age, sex, or race into consideration.

Among other things, it means it can do that to 10 million photos, which you'd have to pay a lot of humans to do if you wanted it done otherwise.


There are a lot of other cues. Oversimplifying, here in Argentina you can have a few clues from facial hair in men:

Beard like Che Guevara -> Left

Moustache like Saddam Hussein -> Right

There are exceptions, and most people shave all facial hair, so it's more complicated, but there are many easy clues in the face. Are the easy clues enough to explain a ~70%, or did they do something really interesting?


Yet Alberto, the current left-wing president, has a moustache like Saddam Hussein.


Because comparing against a "competent human" is enough of a control group to be sure that it's finding interesting signal in the data and not just exploiting something trivial like race or political clothing.

It doesn't need to be better than humans to be interesting. Coming pretty close would do. But if you told me humans score 90% on the same pictures, I'd believe there's something unrepresentative about the sample pictures, because I don't expect humans to have that accuracy.


Unfortunately people don't have to be good at something to advocate it or force it on others. Just look at all the detoxes, random diets, and cultural remedies like trying to cure plague with whisky, bloodletting, etc.

Our deductive powers haven't improved much; they just moved to areas of our lives that are more nebulous. Fundamentally, bloodletting and "looks like a hippie, and hippies shouldn't work for a bank" are about on the same footing.


The question would be whether a trained human is better than the machine. AlphaGo was deemed stronger than humans only once it beat the best Go player.


This must be how god feels.


People are desperately trying to tell you who they are with everything that they do.


Perhaps it’s checking if the person took a selfie in a car with sunglasses on.


Got into Art a few years ago. Took classes and everything.

People like when I take photos now because mine come out better. I simply got a lot better at understanding lighting and angles.


This is all discussed in the paper. The answer is that liberals were more likely to show surprise, less likely to show disgust, and more likely to face the camera directly. Much less suggestive was the wearing of beards and glasses, which were very nearly in the noise.


Please consider that "conservatism" in 2021 isn't just a political leaning - it requires one to hold a great number of hateful beliefs that are provably false, and therefore to be constantly disgusted and angry.

Is it so unreasonable that these feelings of disgust and anger are simply readable by machines on people's faces?


Please don't take HN threads into political or ideological flamewar. The core value of this site is curiosity and flamewar is the single greatest poison to it.

https://news.ycombinator.com/newsguidelines.html


> - Yes, this is significantly better than both a coin flip and a human classifier. They gave the same test to humans, who did much worse than the model. See Abstract, Introduction, and Results.

Where do they say that they gave the same test to humans? All I find is the reference [15] that points to https://www.researchgate.net/publication/232255935_Accuracy_... which cites a previous article about a different set of photos and a different question.


They very much didn't. Downvoted the GP for taking a snarky tone about "an avalanche of people commenting on this who didn't bother to check the article before raising their methodological objections" but then mischaracterizing the methodology himself.


And some humans may be very good classifiers, or would be if trained.


The 72% number is the result when not controlling for demographics. When controlling for demographics, the results ranged from 65% to 71% accuracy.

https://www.nature.com/articles/s41598-020-79310-1/figures/2

It makes me wonder what the accuracy would be if they controlled for demographics at a smaller granularity, like sub-ethnicities.

Furthermore, it appears that the Canadian dating site data set was 54% conservative, so an algorithm that always guessed conservative would be correct 54% of the time. From the article I can't tell what the balance was for the other data sets.


"The accuracy is expressed as AUC, or a fraction of correct guesses when distinguishing between all possible pairs of faces—one conservative and one liberal." - so no way to guess better than 50%.


If they are distinguishing between pairs with different categories then they just need one of the pair to be easily categorised to know the category of the other. How do they control for this?


Ah, yes. I missed that.


Also: "Overall, the average out-of-sample accuracy was 68%..." (again, this does not control for demographics).

I would not call it "prediction" when the "predictor" was trained on the data set you're testing it against. The out-of-sample number would be more fairly called prediction. It's unclear what the number would be for out-of-sample accuracy corrected for demographics, but we could extrapolate a guess of 64%.


I think people automatically jump to the conclusion that studies like this are linking the development of innate physical characteristics with political allegiance.

I can understand why this line of enquiry is troubling; as it obviously ties in with branches of science in the 20th century that were immoral.

Logically, some physiology will affect our reasoning from a pretty deferred position. Our genetics predispose our brain chemistry. I'm led to believe the development of certain hormones has been shown to have an effect on our physical features; for instance, increased testosterone providing a more prominent browline. It doesn't feel preposterous that such physiology might have _some_ bearing on the way we align our worldview.

It also doesn't surprise me that these factors might be able to be used to infer correlation when used with a very large dataset.

People are nuanced though, and I struggle with the concept that we are total slaves to bodies we're born with. I believe choice (through nurture) allows us a high degree of freedom to counterbalance the initial physiological stack our genetics encourages.

If this is true, how is this model's reasoning able to successfully predict political allegiance?

I'm imagining the way we present ourselves provides subtle nods to prominent figures we respect and cues towards our politics. We leak information through body language, dress, expression. Is this where the extra inference comes from?


> What would an algorithm’s accuracy be when distinguishing between faces of people of the same age, gender, and ethnicity? To answer this question, classification accuracies were recomputed using only face pairs of the same age, gender, and ethnicity

Perhaps I am reading this wrong, but this is not controlling for demographics. If, for example, 70% of older white men are conservative, you could reach 70% accuracy by predicting all older white men are conservative.

Yes, it's much better than a coin flip, but random 50/50 guessing would be a very bad strategy.

It's really a shame that they did not have humans and their system classify the same out-of-sample data so we could compare apples to apples. Perhaps they will in later research.


There's evidence of heritability of political orientation, since it is tied to personality traits. Given that, it makes sense that it is possible to predict political orientation through looks alone. After all, looks are also inherited traits.

The article gets into this a little bit. They are able to predict traits like intelligence and honesty by looks alone.


That reasoning sounds like a logical fallacy:

1. Personality traits are inherited

2. Personality traits influence political views

3. Looks are inherited

-> Therefore, looks influence political views (?)

"A is X and does Y" "B is also X, so B does Y as well."


I think the statement is that looks and political views are both observable variables that are dependent on the same latent variable (genetics). And that through this link, you may be able to (at least partially) infer one observable variable when given the other.


Personality is only weakly tied to political view according to the above comment... so... ?


I think this is a very dangerous line of study. I can easily see these results getting used to forward racism and other appearance based discrimination.


> Equality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group.

- Steven Pinker


Truth doesn't really care about your sensibilities. It seems there is actually an art to finding the mind's construction in the face!

What this is really telling us is that we should not be judging others on their political, sexual, ideological, whatever-ical affiliations. A person is a person and we should value them for that and only that. These other qualities may be more or less useful in some sense, but not in the sense that counts - that they are a thinking feeling person who deserves our respect.


> Truth doesn't really care about your sensibilities.

I think the GP was questioning the wisdom of seeking a detailed answer to this question; not whether or not the question has an objectively true answer.


Do you think people hundreds of years ago with different common political views - some of which we'd find abhorrent now - had different faces too?

Someone should test this classifier on old images. Maybe it's just identifying fashion, not facial structure.


A lot has changed, including nutrition. I would not be surprised if 17th century skeletons and body builds were observably different from contemporary ones. In fact, at least the height and weight of an average person are fairly different compared to the past.


And those are all non-genetic factors along the lines of other non-controversial things like "how wealthy you are will influence your political views," versus things supporting the "all political views are sacrosanct and can't be judged, because people can't help themselves" direction that 'plutonorm was suggesting.


No, what would change is that you'd get people from hundreds of years ago expressing conservatism or liberalism _for the time_. It's neither fashion nor facial structure, it's more or less intensity of facial scrunch vs. innocence, or wariness vs invitation. This is going to hold true for all humans and indeed for similar enough animals so long as they have the ability for comparable postures and expressions (dogs and cats would have the capacity for posture but I think dogs are more capable of brow expression/mobility, and of course monkeys and apes are close parallels to human expression)

So you could, in a limited sense, tell whether you've got a hippie cat or one who wants the hippies to get off its litterbox :) the latter will give you more side-eye, and more of a narrowed gaze.


I honestly don't see the danger here. This is an incremental addition to an already very large body of research that's existed for a while.

And racism isn't rational. You can't combat racism by suppressing research. If a study came out demonstrating that Alpha Race is smarter than Beta Race, and that Beta Race is smarter than Gamma Race, the racist Betas might use that study to justify their hate for Gammas, but they'd find an entirely different reason to hate Alphas.


Traditionally, by saying they have the wrong personalities.


I really hope not. 72% is impressive. But, when you're thinking of making any real world decisions based on it, 72% shouldn't really be considered much better than chance.


You shouldn't make any decisions on it, but it's striking that it works at all. It would be interesting to know why.

I have my suspicions about that, which are that people tend to mimic those they see around them. Which could well extend to how they hold their faces unconsciously. It would be not unlike the way we develop similar accents to those around us, just with different muscles. But that's a hypothesis that would have to be tested.

Sadly, if it holds up, it almost certainly would lead to people making real world decisions.


I think people are already making those real world decisions, with or without being able to automate the process. The real implication here is that political ideology owes more to raw emotional biases than it does to analysis and introspection. It echoes a person's general attitude toward life in the absence of specifics. I would say this is not a revolutionary observation, but it's a confirmation through experimental means.


This is Hitler's head/nose measuring all over again...


I would be very interested if instead they first passed it through an independently-trained emotion classifier, and then looked to see whether the emotion vectors alone can predict liberal vs. conservative to some degree.


Seems like a lot of medical/living history can be derived from the face. Rings around the eyes tell of second jobs and long hours. Sunburned skin tells of work outside, exposed to UV. Bad shaves tell of a lack of time and money; a lack of makeup tells a story about a lack of time for luxuries. Substance-abuse skin tells of hard times.

All in all, this sounds like a nice tool to detect anglosphere-wide signs of a middle class that lost out to globalization. Which in itself makes it valuable for judging credit history by looks.


Is it a meathead / not-meathead detector?


I'm surprised they didn't classify and weight for facial features such as general symmetry or eye distance.


> avalanche of people commenting on this who didn't bother to check the article before raising their methodological objections

This is a norm nowadays, and a bad one. In some cases it's carelessness or laziness (why go to the effort of reading the article when posting an uninformed contradiction will provide free explanations), in others trolling or deliberate propagation of misinformation.


You, sir/madam, are a saint!


So even filtering for the extremely limited number of things that you allow us to find problematic, by their own admission:

- they know that it's actually probably "working" (lol, 72%) because of other biases they didn't take into account

- ultimately it puts people into 2 bins which are both huge and disparate, reducing complex multi-dimensional elements to a meaningless binary.

cool cool cool.


I don't really understand this comment. You don't think it's surprising or interesting that it could predict with 70% accuracy the political orientation of two different older white males? Or two younger non-white females?

I found this to be a surprising result.

What's the mechanism? Is it the haircut? The facial expression? The quality of the photo? Maybe. Do those count as "biases that they don't take into account"? Could be. Would it be interesting to learn what those "biases" are? Yes!


Surely the ability to sort people into meaningless bins on facial features alone is worthy of note?

Let's say you created an arbitrary classification ("number of letters in street address" or "cosine of age in minutes") and an algorithm could predict that based on a photo alone. That would surely indicate that the classification wasn't arbitrary and there was something more interesting happening?

At the very least you'd want to dig deeper.


This is statistically significant. The implications are massive. The authors are signalling it might be worthwhile to take a look at the exponential growth of these technologies.

72% is much better than:

* random chance (50%)

* human accuracy (55%)

* 100-item personality questionnaire (66%)


Scoffing at 22% over random? Let's play poker some time.


>Ultimately it puts people into 2 bins which are both huge and disparate, reducing complex multi-dimensional elements to a meaningless binary

Binary, yes. Meaningless, no. Having multi-dimensional information is better than binary information, but having binary information is better than no information (or guessing). Telling me the temperature, humidity, wind, cloud cover, and precipitation outside gives me more information than just telling me it is hot or cold outside. But knowing if it is hot or cold outside is still far better than not knowing anything. You can still make decisions and take actions based on limited information, which you could not do with no information.


> lol 72%

It takes 33 bits to single out a human. 72% means a substantial fraction of a bit -- it's not much, but combine it with a few dozen other clues of similar magnitude and you're really getting somewhere.
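
Rough back-of-the-envelope, assuming balanced classes (my numbers, not from the paper): a binary clue that's right 72% of the time carries about 0.14 bits of mutual information with the label.

    from math import log2

    def binary_entropy(p):
        return -p * log2(p) - (1 - p) * log2(1 - p)

    print(1 - binary_entropy(0.72))  # ~0.145 bits per 72%-accurate clue
    print(log2(7.8e9))               # ~33 bits singles out one of ~7.8 billion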


This isn't a study that shows that people's faces indicate their political leanings.

It's a study which shows that pictures that people select to represent themselves publicly have features that indicate political leaning.


Exactly. There was some fuss a while back about a similar classifier for sexuality. It turned out to be guessing mostly based on head tilt, personal hygiene and whether the person was wearing glasses. The physiognomy component was ~nonexistent even though it was publicized as though it weren’t. People intentionally if at times subconsciously present themselves in a way that signals information to kindred spirits. You’d need to bring in hundreds of people, wash them and basically take mugshots to control for that.


I think that conclusion is at least as interesting as physiognomy. It's remarkable that a computer could be more sensitive to it than people.


People don't get confirmation of those details normally, unlike the ML algorithm. Of course, we don't lock people in boxes with a stack of training-set photos for a period of time equivalent to ML training.


Agree. Even the few pixels bordering the face in the sample image can show she's outside. She chose a smiling picture, she's wearing makeup, etc...


That's a really good observation. The prior embedded in their image data is the users' own bias about what is a "good" representation of themselves.


>images were tightly cropped around the face and resized to 224 × 224 pixels


One can only wonder what the result would be by using the equivalent of government ID photos (neutral expression, no smile or make-up, solid backgrounds).


Yep, I'd look very closely for training data bias.


Profile photo with a person wearing a baseball cap and Oakleys? Yup, that’s a republican.


If it's that easy then why was human guessing only 55 percent accurate?


Worth noting the human guessing was not on the same data set, but I believe the machines are going to beat us at this in general.


At least partly due to lack of feedback on accuracy. I don’t know about you but I don’t necessarily ask everyone I meet their political leanings, so it’s hard to train yourself other than through stereotype.


My guess would be sampling bias. Most people base their model on small geographically restricted samples, and are heavily biased by media. It is possible that some people can repeatedly perform better than 55%.


I think the key line is how much better this system is than humans attempting the same task:

> Political orientation was correctly classified in 72% of liberal–conservative face pairs, remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%).

This isn't a matter of "recognize that old white people are conservative", because people will do that already, and they know all those biases. And the system doesn't lose much accuracy when comparing otherwise similar people.

This system is picking up on things we don't notice. Maybe it's the photos themselves (they are self-selected), maybe it's micro expressions in the face, maybe it's something else entirely.

But damn it's neat and maybe frightening. Imagine if your next hiring manager had a quiet little camera in the corner and chose you based on your predicted approval of unions.


As noted by gus_massa in another thread, the 55% figure for human accuracy is from a different study with a different dataset, so it's questionable how comparable the results are.


> But damn it's neat and maybe frightening. Imagine if your next hiring manager had a quiet little camera in the corner and chose you based on your predicted approval of unions.

This is the dystopian vision of the future I come to HN for. Bravo!

(It's also why I do ML, since I'll be on the front lines to notice if something like that is being deployed. Or at least somewhat more likely.)


> This is the dystopian vision of the future I come to HN for

It's what I'm here for. Fascinating technology- how might I ruin the world with it?


Sorry for the platitude, but ML is only a tool. Nefarious usage gets counterbalanced by e.g. identifying tumours.


It doesn't balance out; nefarious usage is going to cause many, many tiny harms that are harder to detect, and many larger harms that will be excused, up to and including denying the beneficial use of identifying tumors, because algorithms are going to be presented as beyond reproach.

Beyond detecting union affinity, maybe someone will also make an ML twin that predicts how much they can underpay someone.


I'm sure that's exactly what the tech giants will do in their feudal-governed cities they are going to carve out of Nevada.


> This system is picking up on things we don't notice.

In the case of men at least some studies have found a correlation between testosterone level and political orientation, something that matches well with my anecdotal observations. It is not outlandish to think this manifests in visual cues in even something as limited as a pic.

https://www.mdcthereporter.com/low-testosterone-left/


I have no idea why this comment got downvoted. This is the most scientific explanation possible. In males, more testosterone induces more risk taking, while low testosterone is linked to risk avoidance. Left-wing policies (“nanny state”) cater to people who avoid risks; people who are risk takers prefer right wing policies. Testosterone also affects muscle growth and facial characteristics.


I've learned some things about testosterone 'cos it's interesting. You're slanting it.

People with more testosterone are more unsatisfied, aggressive, sexually driven, restless, and disappointed. This has nothing to do with something as apparently laudable as risk-taking vs. risk-avoiding, and in fact it's quite easy to be loaded with testosterone and yet constantly fretting over enemies and bad stuff you expect to happen, leading to conservative choices (not risky or experimental choices).

Aggressiveness and dissatisfaction are a better match for testosterone, which MAY lead to risk taking but are just as likely to lead to efforts to control and suppress perceived risks.

There's merit in the effects of testosterone but you're off base in terms of what you think it does.


Aggressiveness is only indirectly related to testosterone. Testosterone makes a person defend status. Researchers have created situations where status is determined by generosity, and they found that in these situations the people with more testosterone were more generous.

Aggressiveness is only going to have that association with testosterone when aggressiveness seems like the best strategy to maintain status.


Economic leftism might be associated with less risk-taking, but why do startup founders - people who often take huge risks - tend to skew socially left? It could probably be partly explained by sociological reasons (might be harder to get support/funding if you're perceived as more socially conservative than average), but even accounting for that, I suspect it's mostly correlated with sincere political belief.

That's one problem with studies like this, as the authors point out. Is Paul Graham liberal or conservative in this binary? I think most conservatives in the US would say liberal, but many liberals, and perhaps even most liberals in his general sphere, might say he's conservative due to "complaining about SJWs and the intolerance of the left on Twitter almost daily, and routinely and proactively arguing in favor of lowering taxes for corporations and very wealthy people".

(The obvious answer is he's neither [http://www.paulgraham.com/mod.html] and it's too hard to fit many people into a one-word binary.)

One could also claim conservatives are by definition averse to change and so risk-averse. They (in this contrived weakman argument) are more likely to want to avoid risks from immigration, the demographics of their community shifting, their industry changing and the possibility of having to find a new kind of job, newly developed vaccines, theological consequences of permitting sinful behavior, etc.

These are all intellectually lazy arguments, but the point is I don't think one can reliably sum up "a conservative" or "a liberal" - whatever that is - as being more or less risk-taking.


>why do startup founders - people who often take huge risks - tend to skew socially left?

People high in the big-five personality trait of openness perform well in startups (and perform relatively poorly in large organizations with an established business model), and high-openness people tend toward leftist political opinions.


Startup founders tend to skew socially left the same way supermodels tend to say being attractive is not that important.


Probably being penalized for linking to an overtly snarky opinion article rather than a scholarly source. Jack-in-the-box links tend not to fare well on HN.


What makes you think your hiring manager isn't already doing that based on their interaction with you and your looks? Even simply subconsciously?


The point is this system would perform better than them. Maybe we will need to learn how to lie to robots?


On the other hand their mental coin flip is going to be documented nowhere, whereas the use of ML in their decision could be traceable and they could get sued for it.


You haven't already? I thought we had plenty of practice: most teenagers do so until they turn eighteen, many answer recruiting personality surveys with what they think the employer wants, etc.

Heck, one of the middle-school career-day quiz suggestions recommended looking at what is listed with and without the "planning on going to college" option checked.


I would like to see more comparisons against the best humans at a task.


> This isn't a matter of "recognize that old white people are conservative", because people will do that already

I think you're giving too much credit to humans, and ignoring the fact that most people would lose a lot of points due to a bias toward expecting that people who are attractive or look well-off or happy would share their own political beliefs.


Presumably some humans are terrible at this and others are as good or better than the machine. Of course, even someone who was very good (say 80% accurate) would be wrong about 1 person in 5.

Being honest about this equips a person with a useful hunch capability that will provide a long-term edge (perhaps as a detective, or in sales, or any of many other contexts where people-reading can help); less honest practitioners might become con artists, or consultants/coaches whose methods are not reproducible or even formalized, such as Dave Grossman: https://www.insider.com/bulletproof-dave-grossman-police-tra...


> I think the key line is how much better this system is than humans attempting the same task:

My key issue is: why does that matter? Humans should not be doing it in the first place; why do we need a machine that's even better at it?


This should be impressive, considering that humans are very good at reading other human faces.


"The dating website sample was provided by a popular dating website in 2017. It contains profile images uploaded by 977,777 users; their location (country); and self-reported political orientation, gender, and age."

It doesn't sound like people were consenting to, or aware, that they'd end up in a facial recognition study.

But I'm not even surprised a dating website would sell such data.


It's likely that users did consent when they checked the "I've read the terms and conditions" checkbox when signing up.


It's possible that the dating website got 1 million users and couldn't find product market fit. They then pivoted to selling their users data.


There are plenty of scraped dating website datasets lying around.


Yes, but I suppose you can't use them in a Nature publication if they are scraped illegally.

This dataset came directly from the dating website.


Ha. Hahaha. I wish. I'm sorry to laugh, but a ton of ML papers are based on illegally-scraped datasets of one form or another, unless they use strictly blessed datasets (Imagenet2012 being the gold standard mostly-useless-in-the-real-world dataset).

OpenAI's Jukebox is based on illegal large-scale gathering of copyrighted material, for example.


What does "scrapped illegally" mean?

I've never encountered this term. I can see how scrapping might be a violation of some websites terms of use, but I've never seen "scrapped illegally" used. Do you have any examples?


- I have personal information on linkedin

- I have agreed with LinkedIn that they may use my personal information for a set of well-defined uses (basically things on the LinkedIn website/service, and some 3rd party services they use to run the website/service).

- LinkedIn promise that they will not share my identifiable personal information with 3rd parties for any use

- LinkedIn's terms of use state that nobody may scrape personal information from their website without their consent. This is how they enforce the previous promise to me

- Some business comes along and scrapes my personal information for their own business use.

- That business knows that LinkedIn prohibit this, and they know that I have only consented for my personal information to be used for LinkedIn itself.

- This is probably "unlawful" (as they're interfering in my contract with LinkedIn), and certainly violating my GDPR rights. Sadly, it's hard to point at a specific example as guidance doesn't have a section titled "Can I ignore individual's explicit opting out of my usage?".

Hence, illegal scraping, as willfully violating the GDPR is illegal.

Just to head-off the very common response: Personal, individual, use is not covered by the GDPR. So there is nothing wrong with you going and using my LinkedIn data for any personal reasons. The moment you try to use it for business purposes though, that's illegal.


Just like the US isn't the entire world, neither is EU.


Well, pictures of faces could be considered personal data per GDPR. Scraping that data without each person's approval could be illegal regardless of any terms of use.


1) Web scraping is not "illegal"

2) I haven't the slightest clue why you think scraped data can't be used in a publication - some results from the first page of Google Scholar:

https://www.sciencedirect.com/science/article/pii/S187802961...

https://journals.sagepub.com/doi/abs/10.1177/004209802091819...


> Their facial images were obtained from their profiles on Facebook or a popular dating website

So, this seems to be doing profile picture recognition, not facial recognition. It's not like they're saying there are physical features in your face that give away your political affiliation - that is to say, putting two people's bodies under the same photographic conditions would probably not create this kind of signal.

What this is probably training on is the cues for cultural values that we self-select in our most deliberately promoted images of ourselves. If you have a carefully-chosen profile picture, it probably includes signals of what's important to you, especially if you're using it to attract people with similar values.


This meme[0] comes to mind. Is this AI picking up on the same types of cues represented below?

[0] https://imgur.com/FK0RzKM



Thank you. That’s a perfect “opposite” collage


The paper says predicting based on sunglasses use is only 52% accurate, so probably not.


> images were tightly cropped around the face and resized to 224 × 224 pixels


That still leaves the micro- and macro-expressions, small grooming cues (makeup or no makeup, eyebrows trimmed or not), hairline, head angle vs. camera, lighting, etc. These are all things that humans very specifically deploy to define themselves and their grouping, and to communicate with others. So I am guessing a whole universe of personal yes-no qualities, political and otherwise, are encoded there, quite intentionally.


I'm sure you're right. But it makes no difference, because when the facial recognition is applied, I think the likelihood that people will have the same "grooming cues" as they do in their profile pictures is pretty high. If I have short hair in my profile pic, I probably have short hair in my everyday life, in the moment that my face is recorded and processed, as well. Yes, I choose how my face looks myself. But that does not mean anything, as objectively it is the same, whether I intended it or not.


At some point in grade school I was home sick for a few days and ended up watching C-SPAN for several hours on end. I don't recall exactly what was going on politically during that time, but many congresspersons were standing up and giving speeches for a few minutes at a time.

I eventually started a game in my mind where I'd try to guess their political affiliation before the chyron appeared. I'm pretty sure by the end I was getting it correct more than 50% of the time. Everyone was dressed similarly, but I remember looking closely at their tie patterns and haircuts as clues.


Upvoted for your use of chyron


I only skimmed the paper, so I'm not claiming to know much about it, but one thing to keep in mind here is that a fair coin has a 50% accuracy using the same terminology as the headline. I'm not saying 72% is not an interesting achievement; it's just that "you can do about 50% better than random chance" describes my gut feeling about how much you could actually see in someone's face.


They do note the random chance bit, and they also note that it's better than humans could judge on their own and even, surprisingly, better than judged by a personality questionnaire.

> Political orientation was correctly classified in 72% of liberal–conservative face pairs, remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%).


Were the humans experts or just random people though?

The real question is whether the tool can beat a lookup table of age, race, and gender probabilities. The tool isn't going to be winning points for phrenology here. Weight, hair color, and hairstyle would also likely tell you a lot.

I don't have any particular reason to believe this tool wouldn't work, but let's not pretend it's getting there by phrenology-esque topologies of people's faces.

A randomly chosen black individual in the United States has a > 72% chance of leaning Democrat. A randomly chosen Hispanic individual has a ~55-65% chance of leaning Democrat. I don't find it crazy to imagine they've got a few other smaller features to boost it.


It further notes:

> Accuracy remained high (69%) even when controlling for age, gender, and ethnicity.


Did it? If you control for gender but not sex, you can use the difference to predict ideology. And for ethnicity, there are subethnicities that matter too - white Italian and white German have different proclivities.


How does one calculate a metric like that?


Simplistically, let's take the above statistic "A randomly chosen black individual in the United States has a 72% chance of leaning Democrat" at face value. A coin flip would be the wrong baseline, because someone of that race in that country does not have a 50-50 chance. So you would adjust the baseline to 72-28 and compare that to the facial recognition results. If you find that the results are the same, then you know the facial recognition is not picking up on anything beyond race. If the results are different, you know the FR is picking up on something in addition to race.

Really it is more complex than that, but fundamentally you try to say “how accurate can we be using just age, gender, and ethnicity” and use that as your controlled benchmark.
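
A minimal sketch of that benchmark, assuming a per-person table with demographic columns (all names and data here are hypothetical): predict the majority orientation within each demographic cell, and require the face model to beat that number rather than 50%.

    import pandas as pd

    # Toy stand-in; in practice, one row per person in the study sample.
    df = pd.DataFrame({
        "age_bracket": ["60+", "60+", "18-29", "18-29"],
        "gender":      ["m", "m", "f", "f"],
        "ethnicity":   ["white", "white", "black", "black"],
        "label":       ["cons", "lib", "lib", "lib"],
    })

    # Baseline: within each (age, gender, ethnicity) cell, always
    # predict that cell's majority label. (Computed in-sample here,
    # which flatters the baseline a little; fine for a sketch.)
    cells = ["age_bracket", "gender", "ethnicity"]
    df["baseline"] = df.groupby(cells)["label"].transform(lambda s: s.mode()[0])
    print((df["baseline"] == df["label"]).mean())  # the number to beat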


I understand what they're implying by "adjusted accuracy". My point is that I'm not sure that metric really makes sense, because "accuracy" isn't a particularly useful metric to begin with. It depends entirely on the sample distribution. "Always guess not fraud" will be 99.9% "accurate" for most use cases.

I'm asking what the literal metric is.

edit: and I don't think your explanation really works for accuracy, because accuracy isn't a relative measure, like, say, R2.


They explain: they tested predictions on pairs of faces of the same gender, ethnicity, and age. The result was 69% instead of 72%, apparently.


Hmm, the actual phrasing is:

> The accuracy is expressed as AUC, or a fraction of correct guesses when distinguishing between all possible pairs of faces—one conservative and one liberal.

I've never seen something like this. Maybe this is a normal procedure?

But I would be worried that the number of old black conservative women would be really small. Seems a bit sketchy.


By performing the analysis within each of those subgroups.


`def f(x): return 'Liberal'` will get you great accuracy running the analysis within a subset of black women.


It says in the article that humans got just 55% (so 10% better than random chance) on the same test.


I wonder if that 55% is from mturk or other survey sites that can be somewhat questionable in terms of quality with how much people are paying attention versus maximizing their hourly survey earnings.


It says on a similar test - it's a reference to a different study with a different data set.


The humans are probably overthinking it. You get ~55% by answering "Biden" for everybody.


In fact, the dating site dataset was ~54% conservative according to their explanations of included data, but the point stands.


According to Wikipedia only 51.3% voted for Biden/Harris.


The dataset was not restricted to voters.


Do you have data that includes non-voters? I haven't seen any; most polls are limited to voters.


There were tons of national polls done for Trump's approval that included all adults (instead of likely or registered voters). Trump fared noticeably worse in the polls of all adults throughout his presidency.


Approval isn't the same thing as preference among two choices. Trump received a considerably higher percentage of votes than his approval rating.


72% is a very significant deviation from 50% though, I wasn't expecting such a result.

>The highest predictive power was afforded by head orientation (58%), followed by emotional expression (57%). Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust.

Emotional expression makes some sense in hindsight, but I wouldn't have thought that head orientation would correlate. It's interesting to know how we betray ourselves with these minute details of body language.


Though, this seems a bit odd; how many people are expressing _disgust_ in dating profiles?


Those categorizations are opposites on a continuum of various kinds of muscle tension, notably brow scrunch muscles. Surprise is literally brow up and jaw slack, disgust involves brow scrunch and lip tightening. Facial expressions also bleed through to our experienced emotions, so going around with face scrunched a lot will MAKE you more suspicious and disgusted with things.


This is an important point - 72% is interesting, but it's 22 points added to the 50% chance of guessing correctly... still interesting though.


A 72% score on this scale is equivalent to confidently knowing 44% of the answers, and coin-flipping the rest.

It's doing something that untrained humans are not capable of [edited to add: although humans were apparently tested by a different method, so this is not properly comparable], but is still a failing grade by usual methods of assessing human knowledge.
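
The arithmetic behind that 44% figure: if you truly know a fraction k of the pairs and coin-flip the rest, expected accuracy is k + (1 - k)/2, so

    # 0.72 = k + (1 - k) / 2  =>  k = 2 * 0.72 - 1
    k = 2 * 0.72 - 1
    print(k)  # 0.44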


> the relative universality of the conservative–liberal spectrum

Grey tribe members may beg to differ

https://www.gsb.stanford.edu/insights/rise-liberaltarian


Anyone not living in a two-party system begs to differ, especially those living in countries where liberals are conservatives (in the sense the term conservative is used within the US).


Not really. Other countries' liberals are the USA's Libertarians. They're much different from neocons.


Can we run the model in reverse (e.g. DeepDream) to see what the stereotypical liberal/conservative looks like?


The main classifier they use is logistic regression, so no. They do mention that a deep NN had similar performance, but even if you deep-dreamed it you wouldn't get images; you would get feature sets, since they first run the photos through a feature-extraction model (i.e., the input to the NN is not raw pixels).
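
For anyone curious, a minimal sketch of that two-stage shape (the 512-dim embeddings and all data here are random placeholders; the real features would come from a pretrained face-descriptor network):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 512))   # stand-in face embeddings
    y = rng.integers(0, 2, size=1000)  # 1 = conservative, 0 = liberal

    # Stage two is a plain logistic regression over the embeddings;
    # there are no raw pixels here to "deep-dream" back through.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores = clf.predict_proba(X)[:, 1]  # per-face "conservative" score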


Oh man, a whole new generation will get thisliberaldoesnotexist.com, thisconservativesdoesnotexist.com, thisfascistdoesnotexist.com ..

On a serious note, sounds like the dating and hiring sites are going to need to hire an all new layer of ethicists and legal teams to defend against algorithmic discrimination.


I want a facial recognition tool that tells me what information I am leaking. We need a haveibeenpwned for facial recognition.


Why play defense when you can play offense?


how about this tool in realtime, a kind of biofeedback mechanism that trains you to look conservative, liberal, sexy, dangerous, harmless etc.?


That could definitely be productized, even if it didn't work very well.


We have that, it's called acting class.


I question the dataset used for this and the basis. They used "self-reported political orientation, age, and gender" and "facial images (one per person) were obtained from their profiles on Facebook or a popular dating website". I first question the ethics of what sounds like a Facebook scrape. Second, I wonder how well they normalized across the terrible filters and frames and variations in pose. Finally, I question the basis with regard to the ability to gauge one's "openness to experience", let alone, say, "opinion on immigration policy", from micro-expressions in the face (or at least those which could be interpreted by a VGG-based facial recognition algorithm).

Edit: One last rant about this palm-reading-esque pseudoscience: I hate that this was put out into the universe, potentially giving the wrong people ideas.


I assume there's no way to actually verify how someone may choose to vote, assuming there's no record of that?

I think there's huge value, now that everything is being sent into a "machine" or "the algorithm", in fucking with it.

Order sex toys from Amazon, show them you're into outrageous books and fool them into creating a fake profile of "you", based on your spending, browsing and other data you generate.

I'd love to ask a machine what it knows about me, how accurate it is, and then switch it all up. I'm too old to vote now (and will probably be dead soon) but I'd love to pick a position completely unexpected just to throw it off.

Poison the well


From Amazon's perspective you have not poisoned the well... you are someone who is likely to buy those things. Amazon does not care if you really like or even use them, it only cares if you buy them.


The machine will know which sort of person tries to fool it in the way that you are trying to fool it.


> I'm too old to vote now

There are places with a maximum voting age?


Presumably they're a cardinal. Cardinals aged over 80 aren't allowed vote for pope.


This is the only reasonable explanation.


Evil plan: release this model as an app, heavily branded to skew usage towards my political enemies, and suggest it to be used to find "<opposite side> infiltrators in your midst".

Accuracy issues are a feature not a bug! Now you've sown a bunch of discontent and suspicion within their communities.


I find this fascinating (with no intention to start a political battle): how might somebody's biology influence their political leanings?



Thank you!


It’s pretty well established that age, gender, and skin color are correlated with one’s voting habits. Here are some stats: https://www.pewresearch.org/fact-tank/2020/10/26/what-the-20...


From the article:

> Accuracy remained high (69%) even when controlling for age, gender, and ethnicity.


I'm not sure this is enough. If you classify all old white men as Republican you can get 69% accuracy. I'd even argue that the more such factors you control for, the less it means.


I think you misunderstand what "controlled for" means in this case. The way the controlled version of the test works is that two pictures are presented at the same time. Both subjects will be of the same gender, the same ethnicity, and (approximately) the same age. The goal of the test is to choose which of the subjects pictured is conservative, and which is not. There is no way to obtain greater than 50% accuracy by choosing "old white men", because both pictures will be of "old white men" (or "young black women", or whatever). Something else in the photo is being used to obtain the boost in accuracy: perhaps facial hair, perhaps obesity, perhaps apparent youthfulness, perhaps a guess at sub-ethnicity, perhaps pose---but it has to be something other than "old white man".
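
A sketch of how that controlled comparison can be scored (column names are hypothetical): only pairs drawn from the same demographic cell count, so demographics alone can never lift the number above 50%.

    from itertools import product
    import pandas as pd

    def matched_pairwise_accuracy(df, cells=("age_bracket", "gender", "ethnicity")):
        # Pairwise accuracy restricted to (conservative, liberal) pairs
        # from the same demographic cell, so "old white man" carries no
        # information within any single pair; ties count as half.
        wins = ties = total = 0
        for _, cell in df.groupby(list(cells)):
            cons = cell.loc[cell["label"] == "cons", "score"]
            libs = cell.loc[cell["label"] == "lib", "score"]
            for c, l in product(cons, libs):
                total += 1
                wins += c > l
                ties += c == l
        return (wins + 0.5 * ties) / total if total else float("nan")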


Given the US two-party system, wouldn't accuracy be 50% if you just labelled everyone "Democrat"? 69% doesn't sound "high" in that context.



Skin color. Features resulting from inbreeding. Features due to your gender. Features resulting from gender change.


I don’t understand what you mean here, can you elaborate?


Inbreeding?


I'm not sure where GP is going with this, but for example the Habsburger lower lip [1] is an easily identifiable feature from inbreeding in Western European nobles. If you can identify it, it gives you a good clue about the socioeconomic class of the person, which gives you a good clue about their political orientation.

I'm sure many more subtle examples also exist.

1: https://de.wikipedia.org/wiki/Habsburger_Unterlippe


Is it common to find regular people with this feature in normal life? Other than nobles.


Yeah, it’s not a whole sentence so I’m having difficulty grasping the meaning.


I wonder if the study accurately reflects the lack of political common ground in the population. Are only 28% of us “moderates”?

It’d be interesting to do the same analysis with photos of people taken 10, 20, 30, 40,... 100 years ago to see if the intersection (28%) grows/shrinks historically. Were we always so easily politically separable by our appearance alone?

Has our appearance always revealed our political leanings, or do we now dress in order to make a political statement?


There's no definitive definition of "moderate".

People who want to argue most people are moderate say the political middle is the 75% middle of the bell curve, while people who want to argue most people are partisan say the political middle is only the middle 20% of the bell curve.

What is accurate to say is that political views in Congress have bifurcated (not a bell curve), and that people in America have "sorted" -- decades ago many conservatives were Democrats and many liberals were Republicans, but now Republicans are virtually all conservative and Democrats are virtually all liberal. And people's political identities have become more important to them.

But it also continues to be accurate that when people are surveyed according to their actual political positions on issues (as opposed to party affiliation), political views of citizens are still strongly bell-shaped -- they cluster in the middle. (Contrasted with congresspeople who cluster at two peaks.)


> of the bell curve

There is no bell curve. Political orientation is largely a self-referencing phenomenon. Outside the political elite, who have a high degree of ideological coherence, knowing a person's views on one issue is a loose predictor of their views on another. Gun-toting anti-abortion lesbians and pro-immigration free-market feminists are real, and they aren't some striking minority.


There's a bell curve on each issue independently. This has been well established by surveys.

And therefore however you want to aggregate issues into an axis, you'll invariably find the political distribution to be normal.

It doesn't have anything to do with how coherent or not you find each party's platform to be.

So yes, it is fair to say that political views are bell curve-shaped. It's not an artifact of political parties.


> There's a bell curve on each issue independently. This has been well established by surveys.

I'd be curious to see the data. Measuring agreement is challenging. (Strongly Agree versus Agree mean different things to different people.) And polarizing issues tend to be multi-modal, thereby defying normal characterization.

Where one can create an objective x-axis, e.g. with increasing strength of regulation on a topic, one frequently sees non-normal preference patterns.


The political scientist Morris Fiorina is who you want to start with. His book "Culture War? The Myth of a Polarized America" provides a great overview with lots of data.

A central finding is precisely that people appear polarized when you measure badly, but when you really delve into their nuanced beliefs with better surveys and more accurate questions, the bell curve is apparent everywhere and polarization disappears.

And I don't know where one sees non-normal preference patterns. Regulation is a great example -- very few people believe in zero regulation, and very few in extreme regulation. People overwhelmingly prefer a moderate amount. Of course, whatever the objective measurement is, you may need a logarithmic scale or something else for it to show up as truly Gaussian -- as well as not limiting the range of responses. But on any modern issue of society-wide disagreement, one virtually always finds it to have a single central peak (as opposed to two peaks or more, or increasing/decreasing monotonically).


I wonder how much data about a person you can squeeze out of a high-definition video, analyzing not only the face, but grimaces, tone of voice, rhythm of speech, etc.

That should give you a lot more hints than a still image.

The worst possible uses: filtering out undesirable people applying for jobs or for college, firing people who are suspected of belonging to a different political tribe.


It seems strange to me that the authors of the paper seem to have made no attempt to determine which factors the model is using to differentiate between images. Or even to ask the human comparison group how they obtained their better-than-chance results. Instead, both the humans and the computer model seem to be treated as perfect "black boxes", which give entirely inscrutable answers.

It's not that it's always trivial to figure out what's happening inside a model like this, but I'm surprised that no attempt seems to have been made: modifying images and seeing which changes increase the scores, bringing in a co-author who actually understands what the model is doing, seeing if they can obtain similar results with a more interpretable model. Am I wrong, and finding out what's actually happening is in fact impossible, or of no benefit?
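For what it's worth, the first of those -- modifying images and watching the score -- is a standard cheap probe called occlusion sensitivity. A minimal sketch, assuming 'model' is the trained classifier returning one scalar liberal-vs-conservative score per image and 'img' is a 3x224x224 tensor (both assumptions, since nothing was released):

    import torch

    def occlusion_map(model, img, patch=28, stride=14, fill=0.5):
        # Slide a grey patch over the face; record how the score moves.
        model.eval()
        _, H, W = img.shape
        with torch.no_grad():
            base = model(img.unsqueeze(0)).squeeze().item()
        rows = (H - patch) // stride + 1
        cols = (W - patch) // stride + 1
        heat = torch.zeros(rows, cols)
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = img.clone()
                occluded[:, y:y + patch, x:x + patch] = fill
                with torch.no_grad():
                    score = model(occluded.unsqueeze(0)).squeeze().item()
                heat[i, j] = base - score  # big drop => that region mattered
        return heat

High values in the returned map mark the facial regions the model leans on, which would directly answer the "which factors" question.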


Maybe the world is polarising into two kinds of extremes: one with a sense of humor and one without it. And it has a noticeable effect on the face. I'd love to see a few samples of faces from each class.


I'm incredibly curious which political group has a sense of humor in your mind. Because I'm going to guess most folks would say, "my political group".


Can you read my mind?!? ::Looks at you with suspicion::

In all seriousness, I haven’t followed closely, so I don’t know who hates Dr. Seuss lately, but they are the baddies.


I think it's yet another situation where everyone is escalating hard from "hey, there are some stereotypes in this particular media that are problematic when presented uncritically".

That initial idea somehow turns into, "The left hates Dr. Seuss" and wild slippery slopes about book burning.


The left is literally pushing for the banning of a childhood classic with no racial problems. No slippery slope needed.


That isn't true. There was a scholarly paper that came out in 2019 observing that a few of Dr Seuss's many children's books included racial stereotypes that are dated to the point of being offensive, and that Geisel/Seuss had also produced a lot of adult cartoons that were overtly racist, a complicating factor for educators who used his books to educate children about discrimination.

https://sophia.stkate.edu/cgi/viewcontent.cgi?article=1050&c...

Recently the publisher announced it was going to stop selling those - I think it was 6 books out of 107 in the catalog. There was no pressure campaign or wave of outrage driving it.

Perhaps you should choose better news sources, as the ones you are using seem to be serving you poorly.


Ebay is banning accounts that sell Dr. Seuss books. Fact.

The outrage over racial overtones came from a left aligned organization. Fact.

It was carried on and cheered on by left leaning people. Fact.

People on the right don’t use words like “problematic” and “racialized undertones”

That’s on you guys.

This is from the horse’s mouth:

“ Six popular Dr. Seuss books — including And to Think That I Saw It on Mulberry Street and If I Ran the Zoo — “will stop being published because of racist and insensitive imagery, the business that preserves and protects the author’s legacy said Tuesday,” The Associated Press reported.

“These books portray people in ways that are hurtful and wrong,” Dr. Seuss Enterprises told The Associated Press in a statement marking the late author and illustrator’s birthday.”

^ The above is just stupid.


You can add the word fact after a sentence, that doesn't make it a fact. Fact. ;)


I thought that adding factual and documentary context in a polite way might elevate the conversation, but it looks like you prefer being rude and angry.


I didn't mean to be rude, I enjoy the discourse. Thanks for engaging.


You’re right about me saying fact not making it a fact. Fact. (Oh god I don’t think I can stop now...)

However, I just shared a bunch of open source facts with you. All sourced from left leaning sources so as to be somewhat unbiased (even though my bias on this subject is probably pretty heavy).

This isn’t about winning an online argument, it’s about a call for reason when things have become so silly that we are a step away from the Grinch being canceled because his big nose looks Jewish and that’s Nazism or something.

Not one single neo Nazi has ever been uplifted and supported in their beliefs by classic children’s books.


This made me think a bit - I imagine most of the polarization these days comes from a vocal minority of either side, and that 95%+ of people are much more relaxed about politics (and still have a sense of humour) than we imagine from the media.

So it would be interesting to see if hyperpartisans could be separated from moderates by their appearance (just to be clear, I don't believe in phrenology; I imagine it's information leaking from other aspects of presentation and background).

With respect to humour, the modern lack of humor seems to focus on not offending people, which I admit I don't understand, but I could guess it is predicated on the feeling that laughter is somehow equated with divisive mocking, as opposed to fun. I bring this up only because it reminds me a lot of Umberto Eco's "The Name of the Rose", where the 14th-century religious leaders were arguing that laughter was inappropriate, and Jesus never laughed, somehow rooted in the belief that finding humor in things admitted the possibility of laughing at aspects of religion and therefore not taking them seriously.

Anyway, your comment made me think, so thanks.


Both the far left and far right have no sense of humor because both have lost all sense of reasonableness.

On the right you have Trump cultists who loyally cling to every word he utters, regardless of whether it is true or not, spreading lies and misinformation on social media about voter fraud, vaccines, etc. You also have covid denial on the right, where "the constitution" is used as a bludgeon to resist any reasonable public health policy like mask usage.

On the left you have the wokeness mobs that go around trying to cancel/censor/destroy every historical figure/book/statue who didn't live a perfect life and destroying careers of anyone who says the wrong word or makes a bad analogy. You also have covid zealotry on the left where "covid deaths prevented" is prioritized above all else and we can't re-open schools, re-open businesses, visit grandparents, or otherwise get back to normal life (even after mass vaccination) until 100% of all remaining unknowns are known (even if it takes another 2 years of masking, zooming, and hermiting).

These are all very recent examples.


The far left is pretty good at making fun of the right for saying silly things like this, actually.


Or alternatively, two groups who each think the other has no sense of humour.


When are people going to stop rebuilding high tech phrenology?


When it ceases to be profitable, or to seem so.


I just wish people called it 'correlates' instead of 'predicts', since ML algorithms often fall in the correlation category, not the prediction one.


Agreed. Without the addition of causal inference this is phrenology in ML drag. To their credit the authors say "Here, we explore correlations between political orientation and a range of interpretable facial features. . ."

https://en.wikipedia.org/wiki/Phrenology

https://www.goodreads.com/book/show/36204378-the-book-of-why


Is there some technical definition of “predict” that you are assuming?


Predict can give the false impression that a causal relationship has been defined when you might only have correlations.


So, unless I read the abstract wrong, this is identifying age, race, and gender, then making categorical qualifiers based on those distinctions. While using ML to do the facial recognition to distinguish a person's age, race and gender is neat, categorizing their political affiliation with a 72% accuracy rate is fairly unremarkable, given the tools used by modern parties to garner donations and direct online advertising - no offense.


"VGGFace224 was used to convert facial images into face descriptors, or 2,048-value-long vectors subsuming their core features."

So they had more than age, race, and gender, but it doesn't really say how things were weighted.


I don't really buy this study. If humans can't get a better accuracy than 55%, I'm convinced this vector is leaking some obvious (maybe high-frequency) information that doesn't have anything to do with the face. E.g. the location of the person.


The 55% is very fuzzy. They cite https://www.researchgate.net/profile/Konstantin-Tskhay/publi...

The only relevant 55% that I can find is:

> Allport and Kramer (1946) randomly presented 20 yearbook photographs of Jews and Non-Jews to 223 undergraduate students for 15 s each and asked them to categorize the person in each photograph as Jewish or non-Jewish, or to pass on the trial by indicating a lack of knowledge. The reported median identification for the sample was slightly above chance (55.5%; Allport & Kramer, 1946). Moreover, they found that highly prejudiced people were more accurate at distinguishing Jews from non-Jews

So it's not the same set of photos and not the same question.


When I read a paper like this I'm looking for four things: (1) the data, (2) the benchmarks, (3) the architecture, (4) the controls/ablation.

1. The data:

"We used a sample of 1,085,795 participants from three countries (the U.S., the UK, and Canada; see Table 1) and their self-reported political orientation, age, and gender. Their facial images (one per person) were obtained from their profiles on Facebook or a popular dating website... Facial images were processed using Face++37 to detect faces. Images were cropped around the face-box provided by Face++ (red frame on Fig. 1) and resized to 224 × 224 pixels."

2. The benchmarks:

"For example, when asked to distinguish between two faces—one conservative and one liberal—people are correct about 55% of the time."

3. The controls:

"What would an algorithm’s accuracy be when distinguishing between faces of people of the same age, gender, and ethnicity? To answer this question, classification accuracies were recomputed using only face pairs of the same age, gender, and ethnicity."

A. A complaint:

Geography and income are two powerful conditioners. These can leak in so many ways: uncropped background (geography), image color and quality (income), eyeglass shape (geography and income). This study really needs more controls. Geography and income would be a nice start.


What stood out to me was

> Their facial images (one per person) were obtained from their profiles on Facebook or a popular dating website

so of course the first thing that comes to mind is "how good of a predictor is just knowing which of those two sites the image came from?"


> Geography and income are two powerful conditioners. These can leak in so many ways: uncropped background (geography), image color and quality (income), eyeglass shape (geography and income). This study really needs more controls. Geography and income would be a nice start.

But then the data wouldn't represent the natural world: nature as it is.

Raw data is the correct thing to use, because it's what a hypothetical other person would also use if they ran the same experiment.


Uh, the headline claim is about faces, how does it make sense to then insist that you must leave the background in?


This reminds me of an early ML study about detecting skin cancer from pictures with a high accuracy rate.

The problem was, that with the ML, they ended up building a ruler classifier, because most of the pictures with skin cancer happened to also have a ruler in them to measure the size.


Or the commercial model that identifies criminals from their photograph. Turns out people who frown are criminals. People who smile aren't. Or so you'd believe if you anchored your expectations comparing mug shots to social media profile pictures.


That wasn't the claim. The claim here is that we should scrub certain faces from the dataset in order to change the dataset in a certain favorable way.


No that's not the claim. A control is to understand how your model works, it's not what you release as the final product.


It would be nice to see a logistic regression using at least some of the features known to be useful (including geography and income).

That way we can see how much of the performance is from magic AI pixie dust, and how much is from basic 19th century statistics.
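A sketch of what that baseline could look like -- the column names and data file here are entirely hypothetical, since no such table ships with the paper:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder

    df = pd.read_csv("participants.csv")  # hypothetical data file
    features = ["age", "gender", "ethnicity", "region", "income_bracket"]
    pre = ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"),
          ["gender", "ethnicity", "region", "income_bracket"])],
        remainder="passthrough",  # age stays numeric
    )
    model = make_pipeline(pre, LogisticRegression(max_iter=1000))
    print(cross_val_score(model, df[features], df["is_liberal"], cv=5).mean())

Whatever that prints is the 19th-century-statistics floor; the pixie dust should only get credit for the gap above it.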

Every time I read a paper like this, I have this Margaret Mitchell talk [1] in the back of my mind.

[1] https://youtu.be/XR8YSRcuVLE


Yep, these papers don't usually pass the sniff test. My bet is you can predict the phone brand from the camera grain and that correlates with geography & income.


I hope you're right, otherwise we're in for some kind of AI phrenology nightmare in the not too distant future.


We're pretty much there today.


except for the phrenology part :)


I have built ML workflows for over a decade now. The amount of hogwash I've seen hawked definitely classifies as phrenology :)


Apart from the 55% human accuracy, which apparently comes from a completely unrelated study, the bit that really stands out to me is that it reports the accuracy only drops from 72% to 68% when controlling for demographics in the US (a little more noticeable in the UK). Considering demographics alone gets you 60-90% accuracy on voting intention for many US demographics, it strikes me as extremely odd it would have so little impact on the model.


My political leanings change based on how recently I drank a cup of tea. This obsession with a left-right spectrum is a big part of the problem now.

You think teachers are underpaid? Oh obviously you must be a pro-abortion, $15 minimum wage supporting, transgender-rights activist.

What's that you say, Christian bakers should be allowed to refuse to bake a cake with a pro-gay message on it? Oh, you must be a gun-toting, pro-life, anti-immigrant Trump fanatic.

This kind of sorting people into simple binary categories, and giving them a "shopping bag" full of opinions they're supposed to hold helps nobody.

I'm not sure how this was relevant in any way to your comment, but I just kinda jumped on my soapbox there.


It might pick up on scars, tattoos and piercings, testosterone level, diet (in particular weight), bags-under-eyes, glasses.


>So they had more than age, race, and gender,

Doesn't have to be that way. It could be age, age+1, age+2 ....


I believe that these descriptors are created only based off the visual image:

A face descriptor is obtained from the learned networks as follows: the centre 224 × 224 crop of the face image is used. The shorter side is resized to 256, and the CNNs descriptor is computed for this region by extracting the deep features from the layer adjacent to the classifier layer. This leads to a 2048 dimensional descriptor, which is then L2 normalised.

https://www.arxiv-vanity.com/papers/1710.08092/
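If anyone wants to poke at this pipeline themselves, that descriptor step is easy to approximate. A sketch using a stock torchvision ResNet-50 as a stand-in (the paper used VGGFace2-trained weights, which this is not -- only the procedure matches):

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()  # keep the 2048-d penultimate features
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256),      # shorter side -> 256, as in the quoted procedure
        T.CenterCrop(224),  # centre 224 x 224 crop
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def face_descriptor(path):
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            feats = backbone(img).squeeze(0)  # shape: (2048,)
        return feats / feats.norm()           # L2 normalisation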


Does anyone know what the accuracy of a prediction is if you use only those three factors -- age, race, and gender?


This is the exact question I came to the comments to find.

The abstract states:

>Accuracy remained high (69%) even when controlling for age, gender, and ethnicity.

To give some context, chance is 50%, human guess is 55% and a 100-question questionnaire is 66%.

Personally, I am surprised that the accuracy remained that high when controlling for the three variables I would have considered most telling in the determination (age, gender and race).

I'd be very curious to know what exactly the algorithm is determining from the face photos outside of those obvious variables. I know with an ML algorithm it's practically impossible to determine why the classification was made, but does anyone human here have any thoughts?


In fact, I'd put it another way. I'm surprised the accuracy was not higher when you ADDED IN the three variables to the 69%.

Could it be a version of this: https://hackernoon.com/dogs-wolves-data-science-and-why-mach...


You can basically determine this by looking at voter/exit poll results, and any single criterion would give ~55% at best.


All numbers are Biden-Trump in 2020:

People under 30: 60-36.

White men: 38-61

Black women: 90-9

So there are definitely some strong predictors there.

Source: https://www.businessinsider.com/2016-2020-electoral-maps-exi...
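As a quick back-of-the-envelope: guess each group's majority label and weight by population share. The shares below are invented round numbers (the groups above don't even partition the electorate), so this is purely illustrative:

    groups = {
        # name: (population share, share voting Democratic) -- shares invented
        "under_30":      (0.20, 0.60),
        "white_men":     (0.30, 0.38),
        "black_women":   (0.07, 0.90),
        "everyone_else": (0.43, 0.50),
    }
    accuracy = sum(s * max(d, 1 - d) for s, d in groups.values())
    print(f"{accuracy:.0%}")  # about 58% with these invented numbers

Strong predictors for some groups, then, but a fairly modest population-wide baseline.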


[flagged]


I was not commenting on the main article about the research, but instead on the comment by user karaterobot that I am responding to, which -- if I am interpreting it correctly -- asks how well one can guess purely based on those 3 demographic axes.

As you note, the researcher's predictor appears to do better than random even controlling for these obvious demographic skews, which is fascinating.


Actually, you have to approach it from the other side: for a randomly picked person (voter), i.e. considering the distribution of race, gender and age in the population, how likely is a guess based just on these factors to be correct? The number you come up with might actually not be as far from 50% as you would expect.


They tested this question specifically:

> Both in real life and in our sample, the classification of political orientation is to some extent enabled by demographic traits clearly displayed on participants’ faces. For example ... white people, older people, and males are more likely to be conservatives. What would an algorithm’s accuracy be when distinguishing between faces of people of the same age, gender, and ethnicity? To answer this question, classification accuracies were recomputed using only face pairs of the same age, gender, and ethnicity ... The accuracy dropped by only 3.5% on average


From there, it would seem that cues might come from how 'kempt' they appear, whether the head shot was from a party or for a resume, perhaps color and style of clothing, ... I.e., maybe not strictly the face.


"To minimize the role of the background and non-facial features, images were tightly cropped around the face"

Though cropping can only do so much.


I suspect there are a lot of less obvious things they'd need to control for. Off the top of my head, weight would be an obvious one; in developed countries urban areas (particularly large urban areas) generally have a lower average BMI than rural and suburban areas, and there's also typically a major political difference between rural and urban areas.


But wouldn't this be a reasonable feature used by the classifier to reach its conclusion? They can't control for everything, it would become meaningless.

I think the questions about age/sex/ethnicity are sensible in that it's a valid question to ask whether it's just doing the naive/obvious thing or something more. But if you keep on removing the less obvious things then of course you'll reach a point where it's no better than a coin flip because it's basically comparing blank pictures.


If the total population sampling is the same, I would expect the accuracies to remain the same. E.g. if I can get 72% accuracy in the total population just by looking at age/race/gender, doesn’t that exactly mean the accuracies in each individual category are on average 72%?


Not necessarily, because each age/race/gender tuple can be present in the test dataset in different amounts, and can be either a stronger or weaker indicator to the model.


Still, it’s some kind of weighted average, right? Like that 3.5% drop seems to say more about the test data used than the model performance per se.


So now I want to know 1. what is the eigenface for each of the two clusters? What does the perfect liberal look like? 2. What is the political orientation of https://thispersondoesnotexist.com/
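For (1), a classic approximation is PCA over each predicted class. A sketch, where X is a hypothetical (n_samples, 224*224) array of flattened grayscale face crops and y holds the 0/1 labels:

    import numpy as np
    from sklearn.decomposition import PCA

    def class_eigenfaces(X, y, n_components=5):
        out = {}
        for label in (0, 1):
            faces = X[y == label]
            pca = PCA(n_components=n_components).fit(faces)
            out[label] = {
                "mean_face": faces.mean(axis=0).reshape(224, 224),
                "eigenfaces": pca.components_.reshape(-1, 224, 224),
            }
        return out

The per-class mean_face is the closest thing to a "perfect liberal/conservative" composite, though composites tend to blur away exactly the subtle cues a CNN may be exploiting.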


> These self-selected, naturalistic images combine many potential cues to political orientation, ranging from facial expression and self-presentation to facial morphology.

I was going to ask how they controlled for the presence of compound bows & deer, trucks, and tank tops in the photos, but this implies they did not. Another one would be lighting palette, since that's going to be biased to regions as well. There is something to be said for it, as I'd say I have a 72% chance of guessing someone's political orientation by looking at them as well, which someone once explained to me as being the effect of testosterone levels on the region around the eyes, but that sounded like folksy bro science.


> I was going to ask how they controlled for the presence of compound bows & deer, trucks, and tank tops in the photos, but this implies they did not. Another one would be lighting palette

No need to read into implications. From TFA:

"The procedure is presented in Fig. 1: To minimize the role of the background and non-facial features, images were tightly cropped around the face and resized to 224 × 224 pixels."

While tightly cropping the face isn't perfect, it does address three of the four things you were specifically wondering about.


I was going to ask the same about colored hair, facial asymmetry, hammer and sickles on shirts, and scowls.


How would we collectively react if ML proves accurate in classifying anti-social traits or worse? Mythomania, narcissistic perversion, pedophilia, etc.?

Employers, condos, schools, many communities will want to ML-screen candidates, be it officially or not..


We already do it, at least subconsciously. We’ve also developed a side of society who actively tries to counter it, by actively seeking out-of-the-normative profiles: The skater look in companies is now a thing that helps you get sympathy, being female opens up some avenues, Atlassian’s CEO wearing a mohawk or Jack Dorsey’s looks are all symbols of a society which started searching for non-normative people.

Perhaps we’ll require AI to do the same. Otherwise AI will be an excuse to be racist, saying “It’s the stats!”.


OK but attributes like the mohawk are 'playful' in this setting. It distinguishes its bearer by making him look aggressive while still being a good, collaborative member.

On the opposite, ML could indicate that in spite of one's hippy looks he really is a potential ruthless monster.


Ethical AI seems to be a work in progress.

https://www.cnn.com/2021/02/19/tech/google-ai-ethics-investi...


It's an easy trick when there's only two options: red and blue.

- The rest of the world.


Then going long or short on a security is easy?


Sure, you could probably train an AI to get 72% accuracy on that as well. It would be just as meaningless.


I haven't read the paper yet, but it's not that surprising. I think it's about sociology more than biology.

In the US, African Americans vote overwhelmingly for Democrats, for example. Skin color therefore becomes a very good predictor of political orientation. You can probably extend this to states being populated by various migration waves, say "people who look like Danes vote for Republicans because state X was populated by Danes and votes Republican". Carry this across generations/education, and you may have an explanation.


Assuming this analysis has no problems with method, it's likely that the separation of any two groups on a continuous spectrum cannot ever reach 100% accuracy. In fact for such multivariate groupings as opposing political persuasion, 72% may in fact be as good a score as is possible.

Representing the conservative and liberal groups as Gaussian mixes of multiple attributes, I would expect those two peaks to overlap. Perhaps the real surprise is that they overlap no more than 28%.


That headline is a bit misleading. The facial recognition has to see two faces known to have different political orientations in order to deduce which face is which. The 72% classification correctness is about what one would expect if only 25% of persons had visible features that identified their political orientation and the other 75% were completely inscrutable: 1 - 0.75^2 ≈ 44% of mixed pairs would then contain at least one scrutable face, and 0.44 + 0.56 * 0.5 ≈ 0.72.


The problem with this is bad interpretation: people look at it and think, oh, this means political orientation is somehow inherent to biology or some such nonsense. Any data that can make predictions is just a sign that in this data there are some patterns, but there is never a straightforward interpretation for it.

Also, whoever makes the first app for this will go viral 100%.


So... now, is it because people that came from the same regions a long time ago all happened to tend towards a certain political leaning and that just has stayed within families for that long?! Like a culture thing, but within families?

In general, though, it's amazing how little control we have over who we are despite the feeling that we are in control of it.


Could be, but it's not true across the board. My parents are deeply liberal, to the point of the Constitution be damned so long as their views are enacted in some way. I am just the opposite - a constitutional conservative. I am aware of several friends' families that are the same or opposite.

I think the key to your last comment is how much intellectual control someone has over their emotional response. I like to think I am pretty good with that at this present time. That was not always the case and may not always be the case.


Isn't it possible in many ML models to then synthesize images of the features that most strongly predict the orientation? Like an archetypical "conservative" or "liberal". Could this help identify if the model is picking up on something like facial expression, or facial hair?
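Yes -- the standard trick is activation maximization: gradient ascent on the input pixels to maximize one class logit. A minimal sketch, assuming 'model' is a trained two-way face classifier (nothing the authors released):

    import torch

    def dream_archetype(model, target_class=0, steps=200, lr=0.05):
        model.eval()
        img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
        opt = torch.optim.Adam([img], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logit = model(img)[0, target_class]
            # Maximize the class logit; a small L2 penalty keeps pixels sane.
            loss = -logit + 1e-3 * img.pow(2).sum()
            loss.backward()
            opt.step()
            img.data.clamp_(0, 1)
        return img.detach()

Raw outputs of this are noisy, so in practice people add blurring or jitter between steps, but even the noisy version often shows whether the model fixates on expression, hair, or pose.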


> "Accuracy remained high (69%) even when controlling for age, gender, and ethnicity."

So if I just assign the majority label to all of the population of a given demographics group, I would get the same result right? i.e., predicting "left" for all minorities under 30. You would also get ~70% accuracy.


What do you think 'controlling for age [,etc]' means?


Please state what you want to say. No need to be passive aggressive.


You quoted the part about controlling for age, then described the sort of mistake that comes from not controlling for age. So I would like to know what you think it means, in order to meet you where you are. I'm not expressing aggression toward you.


My interpretation of control is fixing all other variables (the ones they mentioned) except for the one being measured (political orientation). If that's not what they did I'm happy to learn.


In that case I don't understand your original comment, as it describes the sort of mistake that arises when you don't control for other factors, but you appear to accept that they did.


My original comment said:

> So if I just assign the majority label to all of the population of a given demographics group, I would get the same result right? i.e., predicting "left" for all minorities under 30. You would also get ~70% accuracy.

I meant that even if you control age, gender, ethnicity, a very trivial predictor (i.e., always predicting the majority label) could yield similar performance. What I meant to say was that their model may not perform as well as they made it sound.


Reality, please don't bite me. This will probably be an oxymoron :)

An interesting question, to me, is: how would you (personally) react if such a controversial and stigmatized claim were proven true at more than 5 sigma?

Would you be shocked? Or would you accept the outcome? And what would each reaction say about you?


For the vast majority of people -- maybe 80 or even 90% and more -- the predicted 'answer' is basically a 50/50 left-or-right choice (politically speaking). How far does the 72% drop for correctly predicting the smaller political factions, I wonder?


Haven't seen this mentioned yet, but ResNet-50 is an old model. I would probably expect multiple-percentage-point gains from using a better (or honestly just larger) architecture and better training methodology.

Throw an ML engineer at this task and you could probably do *way* better than 72%.
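For instance, end-to-end fine-tuning of a newer backbone instead of frozen descriptors plus regression. A sketch with ConvNeXt-Tiny (an arbitrary pick on my part, not anything from the paper):

    import torch
    import torchvision.models as models

    model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
    # Swap the classifier head for a single-logit binary output.
    model.classifier[2] = torch.nn.Linear(model.classifier[2].in_features, 1)
    criterion = torch.nn.BCEWithLogitsLoss()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
    # ...followed by a standard training loop over (face_image, label) batches.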


This is a fascinating result. On the other hand it is also telling... and maybe dangerous? If we can predict political orientation, can we predict if someone is gay? If they are Jewish? What comes next?

Wouldn't this legitimize all those voices who ever said "I can tell by just looking"?


And I always thought it was funny how in classic books from authors like Jane Austen they’d always talk about a person’s physiognomy, and how you could draw strong conclusions about them from it. Maybe it wasn’t so bogus after all?


I think there's quite likely something to that. I seem to have better-than-average intuition in this area and it's handy, but not reliable enough that I'm willing to build causal explanations on it.

Sadly, uncritical application of physiognomy has led to many bad outcomes involving discrimination or institutionalization, and the ideas are often used to prop up racism or other sorts of prejudice, so it's not much different from pseudosciences like phrenology or palm reading. Consider too that in Austen's time manual labor was much more common and social/job mobility much lower, so there were a lot of subtle clues that could be picked up from someone's appearance. Think how many folk tales center on someone's ability (or not) to cross social boundaries by modifying their appearance.


Is the model available? Would be interesting to see how it classifies members of Congress, governors, state legislators, etc. Assuming politicians weren’t used for the training set? (Also assuming Kanye wasn’t either?)


I can see making inferences from dress, makeup, style, etc. But this is surprising: "facial morphology".

This skims somewhat close to people who tried to infer criminality from facial and cranial characteristics... a century ago


I don’t think it’s crazy. What if rural areas (which lean conservative) have a skewed ethnic makeup? For example, if the Irish, Italians & Asians mostly immigrated to urban areas while Germans and French mostly immigrated to rural areas, then if you can guess the ethnicity from facial structure you can make a guess on political leanings.


A century ago, phrenology failed.

Would it fail if you added a modern AI? How about if you had an MRI that provided more-direct information for the AI model? Feeding actual brain structure data into an AI is far more sophisticated than measuring heads and feeling for lumps.


> But this is surprising: "facial morphology"

You could probably get this from how fat their face was, or how unhealthy their complexion, e.g. https://www.vice.com/en/article/j5e3z7/gym-bros-more-likely-...


Theory: seeing as these are pictures that were very specifically "taken" in a given context - with a camera, by someone, for something (dating, Facebook, etc.) - would it be possible that the algorithm is picking up differences more in how people relate to those conditions than in facial morphology? For example, in the example photo, I would have also guessed that the model was a liberal. But I think it was more about something in the WAY she was smiling and posing for the camera, and I guess also her specific grooming - hair and makeup.


I wonder if accuracy for women is higher? I'd expect so, because women, via makeup, tend to have more cultural information encoded on their faces.


Is it also 72% accurate for income level?

I think the incomes of people on the right skew lower, or they have fashions for makeup, facial hair etc. I'd be interested in an adversarial attack on the classifier.


> Had the political orientation estimates been more precise (i.e., had less error), the accuracy of the face-based algorithm could have been higher.

That is an extremely loose assumption.


I am curious how much additional information, if any, is learned from a person's face. From a full-body photo, with the face blacked out, what can be learned?


They claim facial morphology was one of the predictors. Pretty wild if true, but how does genetics / heredity play into political leanings?


> Their facial images (one per person) were obtained from their profiles on Facebook or a popular dating website.

Not sure I’m entirely comfortable with that.


Perhaps conservatives and liberals upload different kinds of face images to the internet.


Not the issue.

The issue is potentially being included in a study unintentionally, that is impossible to anonymize, without prior knowledge or consent.


The article discusses the underlying correlations at work but I think the title is a bit sensationalistic. Would expect more from Nature.


Note that it's not published in the journal Nature, but in another (lower-impact) journal of the same publisher (Scientific Reports).


My political orientation has changed a few times over the last 10 years but my face hasn’t. This can’t actually work...


First, your face has changed over the past 10 years because of aging, the way you take care of your face, your health, your mental and emotional state, your sleep, your nutrition, and many other factors.

Second, the model in the paper did not look only at the face, but at the entire photograph. How you choose to present yourself on a dating site has also changed over the 10 years: the quality of the photo, what haircut you choose, how closely you shave, whether you're tanned, your facial expression, whether you wear glasses, what objects, landscapes, and colors can be seen in the background, etc.

Finally, and most importantly: Even if your dating site profile picture hasn't changed in lockstep with your political leanings, that's fine -- they can make some errors and still get the 72% accuracy they report.


72% of the time, it works every time...


You think your face hasn't changed over the last 10 years?

You think there isn't a relationship between age and politics?


You’re right that I’ve gained a few pounds and look older. Obviously I was exaggerating to make a point. But I guess it’s all about the 72%, which means it’s far from perfect, but still better than a coin toss.


> Accuracy was similar across countries (the U.S., Canada, and the UK)

This sounds more like people who are from minority groups (or who look like they are, to a computer vision algorithm) are more likely to agree with left-leaning policies, which probably has more to do with the policies in those countries than it does with any sort of genetic features. I feel like this might not work as well in non-Western countries, for example.


"Accuracy remained high (69%) even when controlling for age, gender, and ethnicity."


So your race accounts for less than 3% despite 89% of all black people voting left; that seems strange.

I did not do the full math, but just the ballpark numbers I entered into my calculator say that it should account for about 11%.

* 89% according to numbers from nbc

Edit: So say I assume that every African American person I see votes left. 13% of the population is African American, and 89% of them actually vote left: 13% * 89% = 11%. Then I simply guess on all non-African-Americans with a 50/50 shot. I should then be right 50% + 11% = 61% of the time.


The study looked at political orientation (liberal versus conservative, or left versus right), not political affiliation (Democrat versus Republican). 89% of African Americans vote Democrat for complex historical and sociological reasons. But they have a large diversity of political opinions: https://press.princeton.edu/ideas/the-roots-of-black-politic.... About 30% of Black people today identify as conservative, versus 10% in 1970. But Democratic Party affiliation has been in the 90% range throughout that whole period.


Hmm, I take this back; the calculation should be 0.13 * 0.89 + 0.87 * 0.5 ≈ 55% total accuracy.
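As a tiny sanity check of that arithmetic:

    p_black, p_black_left = 0.13, 0.89
    accuracy = p_black * p_black_left + (1 - p_black) * 0.5
    print(f"{accuracy:.1%}")  # 55.1%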


Although random guessing is 50%, right?


Is this surprising?

We all know there is a demographics/ethnics factor in people's political affiliation.


Has there been any attempt to do the same kind of work for IQ prediction ?


This is a hilarious study. This is like Zuckerberg calling us dumbfucks.


Sigh - modern day phrenology, but wrapped in AI. Just what we need :p


What's a useful application of this, other than more advertising?


Dating app that constrains to like minded potential partners, instead of relying on political signaling in profiles.


We know incest increases the likelihood of genetic disease. I wonder if a dating app could make a "faces too similar" filter to reduce chances of genetic disease in offspring based on just faces.
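The mechanical part is at least sketchable: compare two face embeddings (from any face-embedding model) by cosine similarity and flag pairs above a threshold. Whether embedding similarity actually tracks genetic relatedness is the big open assumption here:

    import numpy as np

    def too_similar(emb_a, emb_b, threshold=0.8):
        # Cosine similarity between two embedding vectors.
        cos = float(np.dot(emb_a, emb_b) /
                    (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
        return cos > threshold  # threshold needs calibration on known relatives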


Having kids with a cousin is about the same genetic risk as a woman over 35 having a child.


More echo chamber...


Can you explain to me how that’s a bad thing? Differences among people do not make them more united and their bonds stronger. Literally the opposite is true.


It's a terrible thing that politics influence this at all. A few months ago I was discussing this with my SO and we came to the conclusion that ~5 years ago neither of us would care about each other's political stances. Today, politics have infiltrated literally every single facet of life and become a mania of sorts for many even non-political types. Ignoring it is difficult, to say the least. With that said, political differences, in my humble opinion, are a great thing as long as they're not radical and within normal realms. But in the polarized climate we live in now, radical political views are all the rage, and they'd be very hard to dismiss in a relationship.


> Differences among people do not make them more United and their bonds stronger. Literally the opposite is true

I tend to agree. It's why I find all of the recent "Diversity is our strength" stuff so puzzling.

I'm not against diversity, I just recognize that it generally results in at least some interpersonal challenges to overcome, not necessarily unity.


You have been found to have committed wrongthink by an automated bot. Please prepare for unpersoning. Have a nice day.



Former Kaiser of Austria-Hungary, Franz Joseph, had a personal motto of "Viribus Unitis". That means "With United Forces". It was also used by A-H military.

In practice, the multicultural empire was greatly weakened by incessant nationalist bickering.

Once you have to declare that X is your strength or something similar, it most likely isn't.


Good counter-example.

I would argue that Diversity is necessary, but not sufficient.

("E pluribus unum" is -of course- the motto of the United States of America. Not the least of nations!)

"If we all reacted the same way, we'd be predictable, and there's always more than one way to view a situation. What's true for the group is also true for the individual. It's simple: Overspecialize, and you breed in weakness. It's slow death. "


And as America becomes more different from one person to the next, how is that whole “melting pot” idea working?

Multiculturalism is dead.


Because the commonalities that exist between people help them overcome their differences, understand each other better and reflect on themselves.


I’m positing that more differences = weaker relationship. Are we agreeing?


For sure. I was just stating why echo chambers are bad. In the context of a dating app, I think dating right now suffers from a totally different and unrelated effect where the pool of potential partners has become so big that people have unrealistic expectations of what they want in a partner. So in that context I actually think anything that narrows the list of potential partners is a good thing.

But still, it's yet another facet of our lives where we deal with disagreements by putting them out of sight rather than communicating and understanding.


That depends a bit. Differences can complement each other.

Some people might be better at some things, while others are better at other things. (thus combining strengths)

Some people might actually be really bad at some other things yet again, while others could happen to be very good at them (thus covering each others weaknesses)

When you have people with different backgrounds and training all working together you can do things together that you wouldn't otherwise be able to do separately.


Well, it's an app for dating, not an app for finding "change my mind" political debate; so being an "echo chamber" is kind of what's desirable for that app.


Finding potential dissidents?



Understanding why it is predictable and using that to either exclude ossified groups from targeted campaigns to save resources or focus on them more tightly if they’re swing.


Surprising, yet not surprising. As the article mentions:

> Both in real life and in our sample, the classification of political orientation is to some extent enabled by demographic traits clearly displayed on participants’ faces. For example, as evidenced in literature and Table 1, in the U.S., white people, older people, and males are more likely to be conservatives.

Most people can predict the political orientation of someone from their own country with >50% accuracy as well, just by looking at a face. Black or latino? Probably liberal. Old white person? Probably conservative. If you can see more than just their face it's even easier (wearing religious paraphernalia? LGBT paraphernalia? etc.)

What I thought was interesting was:

> The algorithm could successfully predict political orientation across countries

I was under the impression that "liberal" and "conservative" had different meanings in the UK vs. the USA, so how could it do this?


> I was under the impression that "liberal" and "conservative" had different meanings in UK vs. USA so how could it do this?

I assume they're using the US definition (meaning "left wing" and "right wing", more or less).


> The highest predictive power was afforded by head orientation (58%), followed by emotional expression (57%). Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust. Facial hair and eyewear predicted political orientation with minimal accuracy (51–52%).

This is really interesting. I never considered head orientation or expression to be a factor. Then again, it sorta makes sense. Speaking very generally, liberal leaning people on social media probably tend to be more likely to post pictures of themselves in a humorous or "soy face" expression, and conservative-leaning types may try to look strong or aggressive.

Also, I automatically assumed features like "handlebar moustache" would mean more likely to be conservative, but it sounds like facial hair wasn't as big a factor.
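Those single-feature numbers are easy to reason about: a one-variable logistic regression on a weak signal tops out just above chance. A toy sketch with synthetic data standing in for extracted head-yaw angles (the feature extraction itself is assumed to happen upstream, e.g. via a landmark library):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=2000)                        # 0/1 political labels
    X = rng.normal(loc=y * 2.0, scale=10.0).reshape(-1, 1)   # weak yaw-like signal
    print(cross_val_score(LogisticRegression(), X, y, cv=5).mean())  # ~0.53-0.55

With this synthetic separation the score hovers in the mid-50s, which is roughly the magnitude the paper reports for head orientation alone.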


A data point suggesting bias is more accurate than a coin flip...


Too close to a Phrenology 2.0, IMAO.


If you can guesstimate someone's age bracket, gender, and whether or not they're white, you can also probably hit a similar accuracy.


Who knew. Being a progressive/moderate makes people more attractive...and improves your sense of humor.


So basically not at all.


could we please not


As someone who's studied facial expression in cartooning and tried to track down how that's done through face muscles, this tracks.

Not only that, it is hackable, and somewhat mutable… and I think that can reflect back into one's general attitude on life. I'm going to share some personal notes on my own face hacking done to serve my purposes as a youtuber and open source coder…

The key phrase here is, "The highest predictive power was afforded by head orientation (58%), followed by emotional expression (57%). Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust." People will respond to you based on what your face is doing, and be more or less favorably disposed to you if you 'match' where they're at.

Guy Kawasaki's on record as trying to maximize his ability to Duchenne smile (crinkle the outside edges of the eyes as your cheeks go up) in order to better influence others. You can make special efforts to dry your skin there, the better to form heavy wrinkles that can come into play, signalling affability and well-disposedness as a proper Duchenne smile would do.

But there's another area. If you fret a lot, or glower, your brow comes down and wrinkles form where your brow meets your nose. This signals suspicion, disgust, hostility. My face-hacking involves putting Nivea cream there and on my forehead, keeping that skin more flexible and mobile, for a more open affable look. But if you're targeting a conservative audience you can do the opposite: look at Tucker Carlson sometime. You can cultivate a world-weary scowl and it will increase your trust with people sharing a similar facial expression, and tend to concentrate your viewer's expressions into ones similar to your own (while you tell them scowl-worthy things), so long as you have their basic trust to start with.

This is all very malleable. Very hackable. You can do it on purpose. I don't know if Tucker Carlson does scowl exercises, but I know if he botoxed his brow scrunch, he would be less effective as a political commentator, because he would be telegraphing the intended reaction to his information more weakly.

We're looking at a general connection between human resting facial expression, and human overall outlook on life. I didn't expect to run across this study but I find it absolutely plausible. Almost axiomatic. You can even frame it in ways that appear to favor one political side or the other, but the underlying principle tells us a lot about how political orientations arise.


"only"


[flagged]


That flamebait at the end led predictably to a flamewar. Please do not post like that again.

https://news.ycombinator.com/newsguidelines.html


[flagged]


The GP shouldn't have included that flamebait, but please don't respond to flamebait by bursting entirely into flames. That's going the wrong way down a one-way street, since we're trying for exactly the opposite here.

Of course it's a highly emotional topic and justly so, but one of our intentions here is that we all work on self-regulation around this kind of thing (e.g. processing strong reactions internally before rushing to comments) - not really for ethical reasons, but just because it's the only way to avoid the failure modes of internet forums.

https://news.ycombinator.com/newsguidelines.html


Fair enough. I am still shocked by the indifference of the other comments, though.


>Downvote me again if you want

no problem, any time

HN guidelines: "Please don't comment about the voting on comments. It never does any good, and it makes boring reading."

I agree, and I vote accordingly.


[flagged]


nope


What on earth makes you think calling anyone anything is off limits?

I'll call a spade a spade. Any day of the week, week of the month, or month of the year. Not being willing to do so is a disservice to everyone around you, and even to the person in question, who may be caught in the Nietzschean transformation into the very monster they putatively fight; something even the people of Israel may be well served to remember. It's one thing to be besieged on all sides, it's another to ante up atrocity on top of atrocity.

You don't get to claim you aren't the monster when you're doing the Same. Bloody. Things. It isn't different this time. You aren't justified doing it, and we've all seen where this movie goes before, your attempts to control the narrative aside.

An African American executive espousing abusive workplace policies targeting a group of workers who are disadvantaged or otherwise unable to realistically defend themselves is a fair-game target for being called out as a slave driver or plantation owner. A people espousing the employment of phrenological methods for the purposes of undesirable population control are a dead ringer for the Nazi political regime circa 1940. No, it doesn't dilute the message. It reinforces that evil is often seductive and insidious in its tendency to convince even good people that they are doing the right thing when by all objective definitions they most certainly are not.

You, sir, would be well advised to think through the consequences of your pleading. To put such unambiguously condemned subject matter off limits is to unfasten society's overall moral compass, by atrophying the ability to sense the fundamental ways in which the evil that drove that atrocity was itself a composite of goods to the people doing it.

I hear you, and I in no way lack sympathy for the victims of those atrocities. Quite the opposite. I have a near all-encompassing moral mandate to ensure that the confluence of circumstances, lack of accountability, and absence of voices of reason that led to such vile rhetoric taking root never happens again. I will call out BLM for their civilly destructive tendencies and their lack of restraint of those that step out of line at their protests; I will call out the woke movement for doing the same thing you espouse, thus ensuring fields of ignorance in which previously experienced and uncommemorated evils can grow and burst forth again.

If people weren't so prone to doing evil things thinking they were doing the right thing, nobody would have to call them out.

Evil, just as hope, springs eternal. It is our duty as the gardeners of our species collective morals to be on top of it. That does not happen by locking the ugly away in a box or by ignoring it. What you ignore has just as much of a positive consequence as what you willfully do. The act of willful ignorance is the one great unforgivable intellectual sin.


Declaring a group or philosophy "off limits," and saying it "has no parallel," seems certain to eventually invite just such a parallel. Hitler didn't wake up one day, speak an incantation that resulted in possession by an eldritch creature of unimaginable evil, and then carry out plans for dehumanizing world domination. Dehumanizing the other, authoritarian fascism, and expansive conquest are common themes in human history, and what sets that era and group apart is more the confluence of all of those things with a time when technology made more horrific things possible than in previous such events, and more easily documented. Even in that era, Hitler wasn't alone among the Axis powers in engaging in barbaric atrocities, which even a brief look at Japanese prison camps in mainland China makes clear.

So no, absolutely not, those subject should never be off-limits. Already your desire to mythologize the holocaust may have caused you to overlook what was happening halfway around the globe. Certainly treating the holocaust as a special case that could never be repeated seems guaranteed to cause people to miss warning signs as those same principles rise in popularity again, as history shows they have and do and will.


If there is a reason to draw a parallel, then why not? Well, maybe in America, but the rest of the world is not taking part in these oppression Olympics.


> The evils Hitler inflicted has no parallel.

The evils Hitler inflicted have plenty of parallels. It doesn't diminish how bad it was to acknowledge that there have been plenty of horrible people throughout history. Post-WWII PR efforts made Hitler a cultural icon of evil. Let's not mince words: he was maximally evil by any relevant framework. But putting him on a different plane of moral existence is a disservice to those suffering from the evil of other, non-Hitler, evil people.


> Casual comparisons like yours means that the lessons learnt from the holocaust will lose their power and ultimately be forgotten.

While I agree with the message of your comment (i.e. the critique of everybody using the word Nazi for everything they don't like), I feel you make another mistake here by - probably unconsciously - reducing Hitler's misdeeds to the Holocaust. This is very upsetting to huge populations of non-Jewish people who lost many members of their families during WW2. Especially to Russians - they lost 27M as opposed to 6M Jews. I don't want to compare the horror of both numbers in any way, I just want to mention this because I notice more and more that many people seem to reduce the evil of WW2 to the Holocaust.


I took pains specifically not to do that. Nowhere in my comment did I limit Hitler's impact to Jews alone. You failed to read my comment properly. Any reference to the Jewish holocaust was in the context of the parent comment. Please read my comment again and don't shoehorn your preconceived notions into what I said. I was very conscious in my wording. One thing I took for granted was that the parent and his family were not victimised by the Germans like other minorities. Or if he was, he doesn't care so much.


Unfortunately the evils perpetrated by Hitler have many parallels. The human race has committed many acts of evil and extermination often along genetic lines. We owe the few victims who are still alive almost 80 years later our sympathy and our commitment to prevent future evil but we do not owe them our silence. These events are like it or not part of our common historical context.

I absolutely would accuse someone of color of imitating a slaver; if the shoe fits, wear it. I have never been afraid to step on toes and I don't think society should fear giving offense if it gets in the way of honest discussion. Contrary to your assertion, I think discussion keeps the events of the holocaust top of mind and relevant. Too many already deny it even happened.

Also complaints about getting downvoted attract downvotes.

On the primary topic of discussion, do be aware that an Israeli terrorist detector might misidentify both Jewish people and Palestinians, but who do you think would be subject to additional scrutiny or mistreatment? Such tools run the risk of attaching seemingly scientific justifications to our pre-existing prejudices.


The lessons will lose their potency not because of comparisons like these but because most people don't particularly care about history and don't want to expend the effort of knowing history as opposed to other things they could be doing.

Usually, mentioning Nazism doesn't work well due to the way it invites flame-wars, to the point where Godwin's Law is well known. However, the specific irony they mentioned is in fact a valid observation: pseudoscience used by the state with grave consequences. I am saying this even though I myself have a strong pro-Israel bias.

Asimov mentioned this idea in a discussion with Elie Wiesel with reference to the treatment of the Edomites as described in scripture. It's an argument that has more to do with exploring the nature of political power itself than to demonize a specific group. It just so happens that in this case we have a group that has experienced one of the most poignant extremes of this phenomenon, hence why it comes up often.

Is the spotlight often placed repeatedly and with unfair frequency on Israel? I personally think it is. But that doesn't close off the topic on its own.


> You wouldn't dare accuse people of colour of imitating slave traders

Not just imitating, but being slave traders too, either in the past or even today. Plenty of tribes in Africa sold other Africans to Europeans, and slavery is still very much a thing in Africa today.

https://en.wikipedia.org/wiki/Slavery_in_Africa

https://qz.com/africa/1333946/global-slavery-index-africa-ha...


If coloured people began doing things that looked very much like what others have suffered through as slaves, they would most definitely be called out. Israel is not like the Nazis, but it is one of the closest things we have seen since - and not just as a single fluke, but again and again.


> If coloured people began doing things that looked very much like what others have suffered through as slaves, they would most definitely be called out.

They do, and they aren't.


I'm still waiting for a reply...


What are who doing and not being called out for?


I agree that Hitler was evil on a scale unlike any other. I also think comparisons should be carefully thought out and not tossed out carelessly. But I do disagree with the notion that we should never compare anyone to Hitler and the Nazis. To completely refrain makes them a one-off and treats them as if that could never happen again. I think the tools available to dictators today make the rise of someone or something like that more likely, so it is very important that we guard against it and look carefully at the lessons of history. Again, emphasizing that we should be careful and not make comparisons callously.


> I agree that Hitler was evil on a scale unlike any other.

I feel it is a great danger to think in this way. If you disregard the others[0], you may overlook certain important aspects of these evil people - especially the factors that allowed them to gain popularity, to rise to power, and to actually execute their cruel plans. Focusing just on Hitler is myopic.

[0] https://fee.org/articles/who-was-the-biggest-mass-murderer-i...


Hitler is a saint compared to Stalin and Mao, maybe even compared to the Kims in North Korea.


Wow, I didn't say "boohoo, Google are such Nazis, omg". I'm talking about a very serious issue: a company practicing scientific racism on people.

All I'm saying is:

- Faception is accusing people of terrorism based on analysing the shape of their faces.

- Faception is claiming that some people have a "higher IQ" (i.e. are a naturally superior race) based on the shape of their faces.

I'm not making this up, just check their website:

--> https://www.faception.com/

Claiming that some races are naturally dangerous criminals while other races have a "higher IQ" is straight scientific racism.

One of the main examples of scientific racism on Wikipedia is WWII: https://en.m.wikipedia.org/wiki/Scientific_racism

If you prefer another example of scientific racism, such as colonists measuring Africans' skulls to justify Caucasian superiority (on the same wiki page), then be my guest.

In any case, I really encourage you to look at the pictures labelled "High IQ" and "Terrorist" on https://www.faception.com/. Maybe you'll better understand how crazy those people are and how dangerous this can be for society.


This study proves that the liberal surprise face meme is real.


Who wants to take bets on how long before the paper gets cancelled and forcibly retracted?


It's like saying: my intuition usually works great - 78% of the time my assumptions and speculations have turned out right.
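
For context on what a number like that buys you: the paper's headline figure is accuracy over liberal-conservative face pairs, which is the same quantity as AUC. A small simulation (the scores are invented, not from the paper) shows how much the two groups' score distributions still overlap at that level:

    # Rough sketch: ~72% pairwise accuracy is equivalent to an AUC of 0.72.
    # The "conservativeness" scores are invented; the gap between the two
    # groups is tuned so a random conservative outscores a random liberal
    # about 72% of the time.
    import random

    random.seed(0)
    n = 100_000
    liberals = [random.gauss(0.0, 1.0) for _ in range(n)]
    conservatives = [random.gauss(0.83, 1.0) for _ in range(n)]

    pairs = 200_000
    correct = sum(
        random.choice(conservatives) > random.choice(liberals)
        for _ in range(pairs)
    )
    print(f"Pairwise accuracy: {correct / pairs:.2%}")  # ~72%

Far better than a coin flip, but the distributions overlap heavily, so any single call is still quite uncertain.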


Generally speaking, these sorts of things don't make any sense to me. People change as they age, as they get new experiences, when they live in different places. How can someone's face tell you what their political orientation is?

When I lived in Texas, I was considered a liberal. When I moved to Chicago, I was considered a conservative. When I moved to Seattle, I was considered a conservative. When I moved to the desert southwest, I was considered a liberal. Nothing about my face changed. Just my address.


Oh cool, are we going to bring back phrenology as long as it's an AI-powered black box? Sounds wonderful. It's going to be impossible for any of this to be abused. I say we should give AI power to a select few people who can make sweeping decisions in everyone's life. Ooh, can we bring back eugenics too? What's wrong with an AI-powered method of selecting the best genes to continue the human race? Then we can sterilize the degenerates the almighty algorithm picks out.

I swear to fuck. All this praise, just to repeat the sins of the 20th century all over again? But oh no, it's okay because it's done by "software engineers".

That, and you "atheist techies" are just as fanatically religious as jihadists. Instead of a deity, you worship Silicon Valley and algorithms.

Of course tech companies can be trusted with our data.

There's nothing to fear from putting your life on social media if you have nothing to hide.

A select few should have absolute say on what we are allowed to even consider "free speech".

Tech companies aren't in it for profit, they're in it to save the world because they're the "educated elite".

Just... why is this not being shot down? Are you that blind to where it's going to lead? It's literally the same steps every other totalitarian psychopath took: mass identification based on bullshit pseudoscience. Then comes the extermination.



