Facial recognition reveals political party in troubling new research – ProWellTech
Researchers have created a machine learning system that, they claim, can determine a person’s political party with reasonable accuracy based only on their face. The study, from a group that also showed that sexual orientation can apparently be inferred this way, candidly addresses and carefully navigates the pitfalls of “modern phrenology,” leading to the uncomfortable conclusion that our appearance may express more personal information than we think.
The study, which appeared this week in the Nature journal Scientific Reports, was led by Michal Kosinski of Stanford University. Kosinski made headlines in 2017 with work finding that a person’s sexual orientation could be predicted from facial data.
The study drew criticism not so much for its methods as for the very idea that something notionally non-physical could be detected this way. But Kosinski’s work, as he explained then and since, was done specifically to challenge those assumptions, and it was as surprising and disturbing to him as it was to others. The idea was not to build some sort of gaydar AI, quite the contrary. As the team wrote at the time, they felt it necessary to publish in order to warn others that such a thing could be built by people whose interests went beyond the academic:
We were really shocked by these results and spent a lot of time considering whether they should be made public at all. We did not want to enable the very risks that we are warning against. The ability to control when and to whom to reveal one’s sexual orientation is crucial not only to one’s well-being, but also to one’s safety.
We felt there was an urgent need to make policymakers and LGBTQ communities aware of the risks they are facing. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats.
Similar warnings apply here, because while political affiliation, at least in the United States (and at least at the moment), is not as sensitive or personal a matter as sexual orientation, it is still sensitive and personal. Hardly a week goes by without news of some political or religious “dissident” being arrested or killed. If oppressive regimes could obtain what passes for probable cause simply by saying “the algorithm flagged you as a possible extremist,” rather than, say, intercepting messages, it would make this kind of persecution far easier and more scalable.
The algorithm itself is not hyper-advanced technology. Kosinski’s paper describes a fairly ordinary process of feeding a machine learning system images of more than a million faces, collected from dating sites in the US, Canada, and the UK, as well as from American Facebook users. The people whose faces were used had identified themselves as politically conservative or liberal as part of the sites’ questionnaires.
The algorithm was based on open-source facial recognition software. After basic processing to crop the image to just the face (so that no background elements creep in as factors), each face is reduced to 2,048 scores representing various features. As with other facial recognition systems, these aren’t necessarily intuitive things like “eyebrow color” or “nose type” but more computer-native concepts.
The system was given the political affiliations reported by the people themselves, and with that it set about diligently studying the differences between the facial statistics of people who identify as conservative and those who identify as liberal. Because it turns out there are differences.
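The setup the paper describes, fixed face descriptors fed to a simple classifier trained on self-reported labels, can be sketched roughly as follows. Everything below is synthetic and illustrative: the 2,048-dimensional vectors are random stand-ins for real face descriptors, and logistic regression trained by gradient descent is a standard choice for this kind of linear probe on embeddings, not necessarily the paper’s exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 2,048-dimensional face descriptors described
# in the paper; real descriptors would come from a facial recognition network.
DIM, N = 2048, 400
true_w = rng.normal(size=DIM) * 0.05           # hypothetical signal direction
X = rng.normal(size=(N, DIM))
# Labels: 1 = "liberal", 0 = "conservative", weakly correlated with features.
y = (X @ true_w + rng.normal(size=N) > 0).astype(float)

# Plain logistic regression trained by full-batch gradient descent.
w = np.zeros(DIM)
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))         # predicted P(label = liberal)
    w -= lr * X.T @ (p - y) / N                # gradient of the mean log loss

train_acc = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"training accuracy: {train_acc:.2f}")
```

With far more dimensions than samples, such a linear probe fits its training data very well, which is exactly why the paper’s out-of-sample, demographics-matched evaluation (described below) matters.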
Obviously it’s not as simple as “conservatives have thicker brows” or “liberals frown more.” Nor does it come down to demographics, which would make things too easy and too simple. After all, if political affiliation correlates with both age and skin color, you have a simple prediction algorithm right there. But although the software mechanisms Kosinski used are fairly standard, he was careful to cover his bases so that this study, like the last one, can’t be dismissed as pseudoscience.
The most obvious check is to have the system guess the political party of people of the same age, gender, and ethnicity. The test consisted of being presented with two faces, one from each party, and guessing which was which. Obviously, chance accuracy is 50%. Humans aren’t very good at this task, performing only slightly above chance, at around 55% accuracy.
The algorithm achieved 71% accuracy when predicting political party between two such demographically similar individuals, and 73% when presented with two individuals of any age, ethnicity, or gender (though one is still guaranteed to be conservative and the other liberal).
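The pairwise test described above can be computed directly from classifier scores: present every conservative/liberal pair and count how often the liberal face receives the higher “liberal” score. This quantity is exactly the ROC AUC of the scores. A minimal sketch with made-up score distributions (the means and sample sizes below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical classifier scores (higher = "more liberal") for two groups.
liberal_scores = rng.normal(loc=0.8, size=500)
conservative_scores = rng.normal(loc=0.0, size=500)

# Pairwise accuracy: over all cross-group pairs, how often does the liberal
# face get the higher score? Ties count as a coin flip. This equals the
# ROC AUC of the score used as a binary classifier.
diff = liberal_scores[:, None] - conservative_scores[None, :]
pairwise_acc = (diff > 0).mean() + 0.5 * (diff == 0).mean()
print(f"pairwise accuracy: {pairwise_acc:.2f}")
```

With the gap between the score distributions chosen here, the pairwise accuracy lands around 0.7, in the same ballpark as the figures the paper reports; a larger gap between the two distributions would push it higher.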
Getting three out of four right might not seem like a triumph for modern AI, but considering that people can barely do better than a coin flip, there seems to be something worth considering here. Kosinski was careful to cover other bases as well; this does not appear to be a statistical anomaly or an exaggeration of an isolated result.
The idea that your political party may be written on your face is unnerving, because while your political leanings are far from the most private of information, they are also something reasonably thought of as intangible. People may choose to express their political beliefs with a hat, pin, or shirt, but they generally consider their faces nonpartisan.
If you are wondering which particular facial features are so revealing, unfortunately the system is unable to say. In a sort of side study, Kosinski isolated a couple dozen facial features (facial hair, directness of gaze, various emotions) and tested whether they were good predictors of politics, but none led to more than a small increase in accuracy over chance or human competence.
“Head orientation and emotional expression stood out: Liberals tended to look at the camera more directly, were more likely to express surprise, and less likely to express disgust,” Kosinski wrote in the author’s notes accompanying the paper. But those features left more than 10 percentage points of accuracy unaccounted for: “That suggests that the facial recognition algorithm found many other features revealing political orientation.”
The instinctive defense of “this can’t be true – phrenology was snake oil” doesn’t hold much water here. It’s scary to think it’s true, but denying what could be a very consequential truth doesn’t help us, since it could so easily be used against people.
As with the sexual orientation research, the point here is not to create a perfect detector for this information, but to show that it can be done, so that people begin to consider the dangers it creates. If, for example, an oppressive theocratic regime wanted to crack down on non-heterosexual people or those with a certain political leaning, this type of technology offers them a plausible technological method to do so “objectively.” And what’s more, it can be done with very little work or contact with the target, unlike digging through their social media history or analyzing their (also very revealing) purchases.
We have already heard of China deploying facial recognition software to find members of the embattled Uyghur religious minority. And domestically, authorities trust this kind of AI as well: it’s not hard to imagine police using the “latest technology” to, for instance, classify faces at a protest, saying “these 10 were determined by the system to be the most liberal,” or what have you.
The idea that a couple of researchers using open-source software and a medium-sized database of faces (trivial for a government to assemble, in the unlikely event it doesn’t already have one) could do this anywhere in the world, for any purpose, is chilling.
“Don’t shoot the messenger,” Kosinski said. “In my work, I warn against widely used facial recognition algorithms. Worryingly, those AI physiognomists are now being used to judge people’s intimate traits – scholars, policy makers, and citizens should take notice.”