
Expressionless faces predict political beliefs, and AI does it with surprising accuracy! Stanford research published in a top international journal

2024-07-24



New Intelligence Report

Editor: Peach

【New Intelligence Introduction】AI facial recognition is already woven into everyone's daily life. Now, a study from Stanford has found that AI can identify a person's political orientation from an expressionless face with surprising accuracy.

Now, scientists have demonstrated that AI can predict a person's political orientation from their face with astonishing accuracy.

Not only that: even a face showing no expression at all can be accurately classified.


That being said, people will have to keep their stray thoughts in check from now on, because if those thoughts are written on their faces, AI will see through them instantly.

The study, from a Stanford team, has been published in the journal American Psychologist.


Paper address: https://awspntest.apa.org/fulltext/2024-65164-001.pdf

As you might imagine, this discovery raises serious privacy concerns, especially when facial recognition is used without a person's consent.

Michal Kosinski, the paper's author, said that growing up behind the Iron Curtain made him deeply aware of the risks of surveillance, and of how elites choose to ignore inconvenient facts for economic or ideological reasons.

AI can understand expressionless faces

Facial recognition is a technology most people are already familiar with.

AI recognizes and authenticates individuals by analyzing patterns based on facial features.

At its core, the algorithm detects faces in images/videos and then measures various aspects of the face — such as the distance between the eyes, the shape of the jawline, and the contours of the cheekbones.


These measurements are converted into a numerical representation known as a facial signature.

The signature can be compared to a database of known faces to find a match.

Or it can be used in a variety of applications: security systems, unlocking mobile phones, tagging friends' photos on social media platforms, and so on.
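
To make that pipeline concrete, here is a minimal sketch using the open-source face_recognition Python library. This is an illustration only, not the study's code; the image file names and the 0.6 tolerance are assumptions.

```python
# Minimal face-matching sketch using the open-source face_recognition library.
# Illustrative only: file names and tolerance are assumptions, not the study's setup.
import face_recognition

# Step 1: detect the face and measure its features, yielding a 128-dimensional
# "facial signature" vector for each image.
known_image = face_recognition.load_image_file("known_person.jpg")      # hypothetical file
unknown_image = face_recognition.load_image_file("unknown_person.jpg")  # hypothetical file
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Step 2: compare the new signature against a "database" of known signatures.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
match = face_recognition.compare_faces([known_encoding], unknown_encoding,
                                       tolerance=0.6)[0]
print(f"distance = {distance:.3f}, match = {match}")
```

The same signature-and-compare pattern underlies phone unlocking and photo tagging; only the database of known faces and the matching threshold change.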

In the Stanford study, the authors focused on examining new technologies and exposing their privacy risks.

In past research, they have shown how Meta (Facebook) sells or exchanges data that reveals users' political views, sexual orientation, personality traits, and other private characteristics.


They have also shown that widely used facial recognition technology can detect political views and sexual orientation from social media profile pictures.

However, the authors point out that previous studies have not controlled for variables that may affect the accuracy of their conclusions.

For example: facial expression, head orientation, and whether the subject is wearing makeup or jewelry.

Therefore, in the new study, the authors aimed to isolate the impact of facial features in predicting political leanings, thereby providing a clearer picture of the capabilities and risks of facial recognition technology.


Human and AI prediction coefficients are comparable

To accomplish this, they recruited 591 participants from a major private university and carefully controlled the circumstances and conditions in which each participant's facial photograph was taken.

Participants wore black T-shirts, removed any makeup with facial wipes, and pulled their hair back neatly.

Their faces were photographed in a fixed pose against a neutral background in a well-lit room to ensure consistency across all images.


After taking the photos, the researchers processed them using a facial recognition algorithm, specifically a ResNet-50-256D model trained on the VGGFace2 dataset.

The algorithm extracts numerical vectors — called facial descriptors — from the images.

These descriptors encoded facial features in a form that could be analyzed by computers and were used to predict participants’ political leanings via a model that mapped these descriptors onto a political leaning scale.

The results showed that facial recognition algorithms can predict political inclination with a correlation coefficient of 0.22.
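
As a rough sketch of that analysis, assuming a linear (ridge) regression and synthetic stand-in data (the paper's exact modeling pipeline is not spelled out here), the descriptor-to-leaning mapping and the reported correlation could be computed like this:

```python
# Sketch: map 256-d facial descriptors onto a political leaning scale and measure
# the correlation. Data is synthetic and the ridge-regression model is an
# assumption; with the real descriptors the study reports r = 0.22.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_participants, descriptor_dim = 591, 256               # as described above
X = rng.normal(size=(n_participants, descriptor_dim))   # stand-in facial descriptors
y = rng.normal(size=n_participants)                     # stand-in self-reported leaning

# Cross-validated predictions avoid scoring the model on faces it was fit to.
y_pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=10)
r, p = pearsonr(y, y_pred)
print(f"r = {r:.2f} (p = {p:.3f})")
```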


The correlation, while low, was statistically significant, suggesting that certain stable facial features may be associated with political leanings. The association held independently of demographic factors such as age, gender, and race.

Next, Kosinski and colleagues conducted a second study.


They replaced the algorithm with 1,026 human raters to assess whether people could also predict political leaning from neutral facial images.

The researchers showed them standardized facial images collected in the first study. Each rater was asked to rate the political leanings of the individual in the photo.

The raters completed more than 5,000 assessments, and the results were analyzed to determine how their ratings of perceived political orientation correlated with the actual orientation reported by the participants.

Like the algorithm, the human raters were able to predict political leaning, with a correlation coefficient of 0.21, which is comparable to the algorithm's performance.
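
A sketch of that rater analysis, with synthetic data and illustrative column names (an assumption, not the paper's actual dataset), might pool each face's ratings before correlating:

```python
# Sketch: average each target's perceived-leaning ratings across raters, then
# correlate with self-reported leaning. Synthetic data; with the real ratings
# the study reports r = 0.21.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
ratings = pd.DataFrame({
    "target_id": rng.integers(0, 591, size=5000),   # ~5,000 assessments
    "perceived": rng.normal(size=5000),             # raters' guesses
})
actual = pd.Series(rng.normal(size=591))            # self-reported leaning per face

# Pool many noisy judgments into one perceived score per face, then correlate.
perceived = ratings.groupby("target_id")["perceived"].mean()
r, _ = pearsonr(actual.loc[perceived.index], perceived)
print(f"rater correlation r = {r:.2f}")
```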


“I was surprised that both algorithms and humans were able to predict political orientation from carefully standardized images of expressionless faces,” Kosinski said. “This suggests there is a link between stable facial features and political orientation.”

In a third study, the researchers extended the examination of facial recognition’s predictive power to a different context by asking the model to identify images of politicians.


The sample included 3,401 profile images of politicians from both the upper and lower houses of the legislatures of three countries: the United States, the United Kingdom, and Canada.

The results showed that the facial recognition models were indeed able to predict political leanings from images of politicians, with a median correlation coefficient of 0.13.

This accuracy, while modest, was still statistically significant, suggesting that some of the same stable facial features that predict political leaning in controlled laboratory images can also be identified in more diverse real-life images.
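
For readers wondering what a median correlation means here: one plausible reading, sketched below with synthetic numbers, is a correlation computed per group (for example, per country), with the median taken across groups. The grouping and the data are assumptions for illustration.

```python
# Sketch: compute one correlation per country, then take the median.
# Synthetic data with a deliberately weak signal; the study reports 0.13.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
correlations = []
for country in ["US", "UK", "Canada"]:   # ~3,401 images split across three countries
    predicted = rng.normal(size=1134)
    actual = 0.13 * predicted + rng.normal(size=1134)   # weak signal, for illustration
    r, _ = pearsonr(actual, predicted)
    correlations.append(r)
print(f"median r = {np.median(correlations):.2f}")
```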


Other research in the field

In fact, as early as 2021, a study published in Scientific Reports (a Nature Portfolio journal) pointed out that facial recognition technology can correctly predict a person's political orientation 72% of the time.


Paper address: https://www.nature.com/articles/s41598-020-79310-1

The experiment found that the AI (72%) outperformed chance (50%), human judgment (55%), and personality questionnaires (66%).


In another 2023 study, a deep learning algorithm was also used to predict a person's political inclination from facial recognition, with an accuracy rate of up to 61%.


However, in the eyes of some netizens, any AI system that claims to be able to read people's emotions or other characteristics (such as political inclinations) from their facial expressions is a scam.

There is no scientific basis for this, and therefore no “good data” that can be used to train an AI to predict these traits.


Previously, a report in the WSJ also questioned this kind of AI: training algorithms on stereotyped facial expressions is bound to be misleading.


What do you think about this?

References:

https://www.reddit.com/r/singularity/comments/1dycnzq/ai_can_predict_political_beliefs_from/