AI Can Be Embedded With Universally Accepted Human Biases

MedicalResearch.com Interview with:

Aylin Caliskan PhD
Center for Information Technology Policy
Princeton University, Princeton, NJ

MedicalResearch.com: What is the background for this study? What are the main findings?

Response: Researchers have suggested that artificial intelligence (AI) learns stereotypes, contrary to the common belief that AI is neutral and objective. We present the first systematic study that quantifies cultural bias embedded in AI models, namely word embeddings.

Word embeddings are, in effect, dictionaries that let machines understand language: each word in a language is represented by a 300-dimensional numeric vector. The geometric relations among words in this 300-dimensional space make it possible to reason about their semantics and grammatical properties. Word embeddings build this semantic space by analyzing the co-occurrences and frequencies of words across billions of sentences collected from the Web. By investigating the associations of words in this semantic space, we are able to quantify how language reflects cultural bias as well as facts about the world.
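To make the geometry concrete, here is a minimal sketch, assuming NumPy and toy random vectors in place of a real pre-trained embedding model: the cosine of the angle between two word vectors is the similarity measure that the methods described below build on. The word choices are illustrative only.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors; values near 1 mean the words
    occupy similar positions in the semantic space."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 300-dimensional vectors standing in for real embeddings; in practice these
# would be looked up in a model pre-trained on Web-scale text.
rng = np.random.default_rng(0)
v_doctor = rng.normal(size=300)
v_physician = v_doctor + rng.normal(scale=0.1, size=300)  # nearly parallel: related meaning
v_banana = rng.normal(size=300)                           # unrelated word

print(cosine_similarity(v_doctor, v_physician))  # close to 1.0
print(cosine_similarity(v_doctor, v_banana))     # close to 0.0
```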

MedicalResearch.com: What should readers take away from your report?

Response: Word embeddings form the foundation of many text-related programs and applications, such as Web search, machine translation, and auto-fill. Essentially, they end up being used by billions of people every day. Being able to uncover bias embedded in such a model of human language has several implications. Does AI perpetuate bias? If AI learned bias from human language, is it possible that humans end up making biased associations because of the language they have been exposed to? How do we deal with bias in AI? How can other researchers benefit from our methods for detecting and quantifying bias?

Before trying to answer these questions, let’s understand how our first method, WEAT (Word Embedding Association Test), detects and quantifies bias. WEAT calculates the mathematical distance between two sets of words representing two societal groups and two sets of stereotype terms, in order to uncover whether a specific group is associated with a stereotype. The sets of words used in these experiments were taken from the Implicit Association Test (IAT), which was introduced by Greenwald et al. in 1998 and has been taken by millions of people around the world. We used these earlier IATs as our baseline and were able to detect every single bias we tested for.

In the original IAT, which inspired me to come up with WEAT, human subjects are asked to categorize words that represent a certain group of people with stereotypical terms. For example, are humans faster at associating women with family and men with career, or the opposite? The reaction times for grouping words together make it possible to quantify implicit associations through this latency paradigm. In WEAT, we replace the latency paradigm with the mathematical distance between the numeric vectors of words. Consequently, we adapt the IAT to machines and use WEAT to test the aggregate language of humans for bias. Our experimental findings showed universally accepted biases, such as attitudes towards flowers and insects, as well as stereotypes that can be harmful to society, such as racial bias, gender bias, and bias against the elderly or people with mental disease.
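As an illustration of how the latency paradigm maps onto vector distances, here is a minimal sketch of a WEAT-style effect size in Python. It follows the general description above; the helper names and the random stand-in vectors are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): how much closer word vector w sits to attribute set A than to B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Effect size of the differential association of target sets X and Y
    (e.g. female vs. male terms) with attribute sets A and B (e.g. family vs. career)."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y)

# Toy demo with random stand-in vectors; real use would load pre-trained embeddings
# and plug in the published IAT word lists.
rng = np.random.default_rng(1)
X = [rng.normal(size=50) for _ in range(8)]
Y = [rng.normal(size=50) for _ in range(8)]
A = [rng.normal(size=50) for _ in range(8)]
B = [rng.normal(size=50) for _ in range(8)]
print(weat_effect_size(X, Y, A, B))  # near zero for unbiased (random) vectors
```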

Once we replicated these biases, we realized how powerful word embeddings are at representing facts about culture, and we then wanted to test statistical facts by using language as a proxy. The United States Bureau of Labor Statistics annually publishes the percentage of women in each occupation. After taking the 50 most common occupation names and calculating their association with being male or female using our second method, WEFAT, we obtained a 90% correlation with the actual statistics. Being a programmer or a doctor is mostly associated with men, whereas being a librarian or a nurse is mostly associated with women. Even though linking men to more prestigious jobs has been considered a bias, it reflects occupational statistics.
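A hedged sketch of that WEFAT-style comparison with occupation statistics might look like the following. The dictionary names (`occupation_vecs`, `pct_women`) and helper functions are assumptions for illustration; real use would plug in pre-trained embeddings and the Bureau of Labor Statistics figures.

```python
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def wefat_score(w, A, B):
    """Normalized association of one word (e.g. an occupation name) with
    attribute set A (female terms) versus attribute set B (male terms)."""
    assoc = np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])
    return assoc / np.std([cos(w, x) for x in A + B])

def correlate_with_statistics(occupation_vecs, pct_women, A, B):
    """Pearson correlation between embedding-derived gender association scores
    and the published percentage of women in each occupation.

    occupation_vecs: dict of occupation name -> embedding vector (illustrative)
    pct_women:       dict of occupation name -> % women from labor statistics
    """
    names = sorted(occupation_vecs)
    scores = [wefat_score(occupation_vecs[n], A, B) for n in names]
    truth = [pct_women[n] for n in names]
    return np.corrcoef(scores, truth)[0, 1]
```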

Removing bias from human data, or removing bias quantitatively from machine learning models, is not the solution for generating unbiased outputs. We need guarantees that decisions are not only unbiased but also accurate. Removing bias from the data or the model would break the correct representation of the world, and consequently the artificial intelligence would not be as accurate. On the other hand, artificial intelligence carries facts about the world. Accordingly, we suggest keeping an expert human in the loop to help guarantee that the outputs or decisions of AI are unbiased.

MedicalResearch.com: What recommendations do you have for future research as a result of this study?

Response: Our tools and methods can be a way for sociologists to study how bias emerges and evolves. Psychologists can use these methods to uncover new types of bias. Linguists can use them to analyze the evolution of language and the causality between bias and language. Policymakers and ethicists can work on finding explicit ways of fighting bias that might be embedded in the AI that is becoming part of our everyday lives. These findings also bring awareness and suggest the need for a long-term research agenda on fairness and transparency for computer scientists. Awareness is better than avoidance.

MedicalResearch.com: Is there anything else you would like to add?

Response: There has been debate around the validity of the IAT and how this applies to our method WEAT, which was inspired by the IAT. Unlike the IAT, our method is applied to learned models. These models, products of artificial intelligence, are based on statistical patterns, and our statistical method WEAT analyzes them mathematically. Accordingly, WEAT and these models produce deterministic output that will not change unless the model is trained on different data.

MedicalResearch.com: Thank you for your contribution to the MedicalResearch.com community.

Citation: Caliskan A, et al. Semantics Derived Automatically From Language Corpora Contain Human-Like Biases. Science. 2017 Apr 14;356(6334):183-186.

Note: Content is not intended as medical advice. Please consult your health care provider regarding your specific medical condition and questions.



Last Updated on April 22, 2017 by Marie Benz MD FAAD