“Caras vemos, corazones no sabemos”

Physiognomy’s New Clothes

by Blaise Agüera y Arcas, Margaret Mitchell and Alexander Todorov

 

https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a 

“Caras vemos, corazones no sabemos” (“Faces we see, hearts we don’t know.”) Popular proverb.

“The idea that there is a perfect correspondence between a person and their image is a psychological illusion fueled by our experience with familiar faces. We instantly recognize images of familiar people, and this recognition evokes our memories and feelings about them. But there is no equivalent process when we look at images of strangers. Each image generates a different and arbitrary impression.”

Misreading people's gestures and appearance can lead to discrimination and mistakes: by ordinary pedestrians, by the police, by judges, by hiring companies, in visa processing, by university admissions officers, and in many other fields.

There are many examples of this kind of mistake. A few weeks ago I was taken for a criminal suspect while walking home after eleven p.m. A young woman of about 25 to 35, whose face I never saw, was walking in the same direction and at a similar pace, about 15 meters ahead of me. When she noticed my presence she began looking over her shoulder frequently, so it was clear she assumed I represented a danger to her. I felt bad when this happened. I understand why women feel unsafe, and of course I didn't take it personally; I have done the same many times in my country, where you can easily get robbed in the street. The conclusion is that this kind of misreading is very common, and when it is made by the authorities, or by people with guns, the consequences can be very serious.

Making assumptions about people based on their appearance is a dangerous mistake, and injustices have been perpetrated because of it. Judges and juries often make life-changing decisions based on how people look, not only on their story, the context they come from, or their psychological profile. Criminals responsible for the same offenses often receive different sentences because of how they look. Black people are common victims of this kind of injustice: they commonly receive longer sentences than white people convicted of the same crimes. And when it comes to choosing between the death penalty and life imprisonment, decisions are affected not only by the inmate's record and profile but by whether their face is read as revealing a more “dangerous” or “evil” nature.

If we are not conscious of how we judge people by their looks, their expressions, the color of their skin, the way they dress, whether they have tattoos, their size and weight, and so on, we will train machines to perpetuate this vicious way of judging, leading to more injustice and inequality. This tendency is deeply rooted in human nature; the popular saying “Tell me who you hang out with, and I'll tell you who you are” reveals exactly that.

“Our existing implicit biases will be legitimized, normalized, and amplified.”

That could happen if we are careless when we train AI to make judgments the way we do, allowing computers to perpetuate our own biases while we enforce them as valid, scientifically grounded fact.

The following case is another example of the dangers of these machine learning systems and their supposed objectivity.

“Predictive policing” (listed as one of TIME Magazine’s 50 best inventions of 2011) is an early example of such a feedback loop. The idea is to use machine learning to allocate police resources to likely crime spots. Believing in machine learning’s objectivity, several US states implemented this policing approach. However, many noticed that the system was learning from previous data. If police were patrolling black neighborhoods more than white neighborhoods, this would lead to more arrests of black people; the system then learns that arrests are more likely in black neighborhoods, leading to reinforcement of the original human bias. It does not result in optimal policing with respect to actual incidence of crime.”
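The feedback loop described in the quote can be made concrete with a small simulation. This is a hypothetical sketch with made-up numbers, not any real system: two neighborhoods have the same true crime rate, but patrols are allocated in proportion to past arrests, so an initial bias in the data is never corrected by the “objective” system.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true rate of detectable crime.
TRUE_CRIME_RATE = 0.1
# Initial human bias: neighborhood A was historically patrolled more,
# so it starts with more recorded arrests.
arrests = {"A": 60, "B": 40}

def allocate_patrols(arrests, total_patrols=1000):
    """Send patrols in proportion to historical arrest counts."""
    total = sum(arrests.values())
    return {n: round(total_patrols * c / total) for n, c in arrests.items()}

for year in range(10):
    patrols = allocate_patrols(arrests)
    for n, p in patrols.items():
        # Arrests scale with where the patrols are sent,
        # not with any difference in the (equal) crime rates.
        arrests[n] += sum(random.random() < TRUE_CRIME_RATE for _ in range(p))

share_a = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"Share of arrests in neighborhood A after 10 years: {share_a:.0%}")
```

Even though both neighborhoods are identical, the arrest data keep “confirming” the initial allocation, so the system locks in the original human bias instead of discovering the true, equal rates.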

The idea of physiognomy has survived the test of time and is a threat in our present day. The belief that people's facial forms and expressions correlate with their moral qualities is a misconception that often leads to discrimination and injustice. One study takes this system of thought and uses machine learning to reinforce it, claiming accuracy and objectivity. This is very dangerous, because the law and the authorities could accept its validity and implement it as a tool for making decisions that will affect many people's lives.

“Wu and Zhang are able to use a variety of techniques to explore this in detail. This is especially tractable for the simpler machine learning approaches that involve measuring relationships between standard facial landmarks. They summarize,

“[…] the angle θ from nose tip to two mouth corners is on average 19.6% smaller for criminals than for non-criminals and has a larger variance. Also, the upper lip curvature ρ is on average 23.4% larger for criminals than for noncriminals. On the other hand, the distance d between two eye inner corners for criminals is slightly narrower (5.6%) than for non-criminals.” [7]
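To see what these landmark measurements actually are, here is an illustrative sketch (the coordinates are invented, and the function is mine, not Wu and Zhang's code): given (x, y) pixel positions for a few facial landmarks, it computes the angle θ at the nose tip between the two mouth corners and the distance d between the inner eye corners.

```python
import math

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex` between the rays toward p1 and p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# Made-up landmark coordinates (pixels) for illustration only.
nose_tip = (100, 120)
mouth_left, mouth_right = (80, 160), (120, 160)
eye_inner_left, eye_inner_right = (85, 80), (115, 80)

theta = angle_at(nose_tip, mouth_left, mouth_right)   # nose-to-mouth-corners angle
d = math.dist(eye_inner_left, eye_inner_right)        # inner-eye distance
print(f"theta = {theta:.1f} degrees, d = {d:.1f} px")
```

Note how fragile such a feature is: θ depends on where the mouth corners sit, and frowning pulls the mouth corners down and narrows exactly this angle. A difference in θ between two sets of photos may therefore track nothing deeper than a difference in facial expression, which is the point the figures below make.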

We may be able to get an intuitive sense of what this looks like by comparing the top row of “criminal” examples with the bottom row of “non-criminal” examples, shown in the paper’s Figure 1:

 

Figure 3. Wu and Zhang’s “criminal” images (top) and “non-criminal” images (bottom). In the top images, the people are frowning. In the bottom, they are not. These types of superficial differences can be picked up by a deep learning system.

 

Figure 4. Stereotypically “nice” (left) and “mean” (right) faces, according to both children and adults.

Another interesting case is the misconception, during the nineteenth century and before, that women were bad at math. Philippa Fawcett obtained the top score on the Cambridge Mathematical Tripos, an advanced mathematics exam, in England in 1890. This was perceived as an error, an exception to the rule. People in Victorian England could not believe or accept that the elite of British gentlemen had been beaten by a very intelligent woman. If we think about this case in terms of numbers, we can see their point: since practically no women attended universities or other academic settings, there was no data that could reveal women's skill, or lack of skill, in math as a group. There can be no statistics without numbers. So when this result appeared, it was seen as an anomaly.

The same could happen if we trained a computer without the amount of information it needs to produce results that come close to the average, which we sometimes call the truth. If those men had allowed more women to learn math and take exams, they would have thought very differently. If we imagine them as a computer making decisions, having to choose one person for a job requiring high mathematical skill from among the exam takers, that computer might have chosen anyone but the person most suited for the job.
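The sampling problem behind the Fawcett anecdote can be sketched in a few lines. The data below are entirely hypothetical: a naive model predicts exam outcomes from group membership alone, trained on records from a time when almost no women were allowed to sit the exam at all.

```python
from collections import Counter

# Invented training data: women are barely represented because they
# were rarely admitted to the exam in the first place.
train = [("man", "top")] * 30 + [("man", "not_top")] * 70 \
      + [("woman", "not_top")] * 2

def predict(group, data):
    """Predict the majority label seen for this group in training."""
    labels = Counter(label for g, label in data if g == group)
    return labels.most_common(1)[0][0]

# The model has never seen a woman score "top", so it predicts the
# opposite for someone like Fawcett -- an artifact of who was admitted
# to the exam, not of anyone's ability.
print(predict("woman", train))
```

The model's confident "not_top" prediction reflects the gate on the data (who was allowed in), not the underlying distribution of talent, which is exactly the Victorian reasoning restated as code.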
