MedicalResearch.com Interview with:
Aylin Caliskan PhD
Center for Information Technology Policy
Princeton University, Princeton, NJ
MedicalResearch.com: What is the background for this study? What are the main findings?
Response: Researchers have long suggested that artificial intelligence (AI) learns stereotypes, contrary to the common belief that AI is neutral and objective. We present the first systematic study that quantifies the cultural bias embedded in a widely used class of AI models, namely word embeddings.
Word embeddings are, in effect, dictionaries that let machines work with language: each word is represented by a 300-dimensional numeric vector. The geometric relations among words in this 300-dimensional space make it possible to reason about their semantic and grammatical properties. Word embeddings construct this semantic space by analyzing the co-occurrences and frequencies of words across billions of sentences collected from the Web. By measuring the associations between words in this space, we can quantify how language reflects cultural bias, as well as facts about the world.
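To make the geometric idea concrete, here is a minimal sketch of how word associations are typically measured in an embedding space, using cosine similarity between vectors. The tiny 4-dimensional vectors below are invented for illustration only; real embeddings are 300-dimensional and come from a trained model.

```python
import math

# Toy 4-dimensional vectors standing in for real 300-dimensional
# word embeddings. These values are illustrative assumptions,
# not taken from any trained model.
embeddings = {
    "flower":   [0.9, 0.1, 0.3, 0.0],
    "pleasant": [0.8, 0.2, 0.4, 0.1],
    "insect":   [0.1, 0.9, 0.0, 0.3],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: the standard
    measure of association between word embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sim_flower = cosine_similarity(embeddings["flower"], embeddings["pleasant"])
sim_insect = cosine_similarity(embeddings["insect"], embeddings["pleasant"])

# With these toy vectors, "flower" lies closer to "pleasant"
# than "insect" does, i.e. it is more strongly associated with it.
print(sim_flower > sim_insect)
```

Comparing such similarities between target words (e.g., names or occupations) and attribute words (e.g., pleasant vs. unpleasant terms) is the basic operation behind quantifying associations, and hence biases, in an embedding space.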