Alt-robot: Human prejudice spreading to AI, new study finds
The better AI becomes at interpreting human language, the more likely it is to adopt human biases, according to new research by scientists at Princeton University published in the journal Science.
Researchers fed words into GloVe (Global Vectors for Word Representation), an open-source learning algorithm that builds vector representations of words from a corpus of roughly 840 billion words, associating words that appear in similar contexts.
A word like “flower” was linked to terms connoting pleasantness, while “insect” clustered with unpleasant ones.
Bias became visible when the words “female” and “woman” returned associations with arts and humanities occupations and with the home, while “male” and “man” were associated with maths and engineering roles.
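The kind of association test described here can be reproduced in spirit with a few lines of code. The sketch below, in Python, assumes a local copy of the pretrained GloVe vectors (here the file is assumed to be “glove.840B.300d.txt”) and uses illustrative word lists rather than the study’s exact stimuli; it scores how much closer a target word sits to “pleasant” than to “unpleasant” words by cosine similarity.

```python
# A minimal sketch of a word-association test over GloVe embeddings.
# The file path and word lists are illustrative assumptions, not the
# study's actual materials.
import numpy as np

def load_glove(path, vocab):
    """Read only the vectors we need from a GloVe text file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            # The vector is the last 300 fields; the token is the rest.
            word = " ".join(parts[:-300])
            if word in vocab:
                vectors[word] = np.array(parts[-300:], dtype=float)
    return vectors

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, pleasant, unpleasant, vecs):
    """Mean similarity to 'pleasant' words minus mean similarity to
    'unpleasant' words; a positive score means closer to pleasant."""
    pos = np.mean([cosine(vecs[word], vecs[p]) for p in pleasant])
    neg = np.mean([cosine(vecs[word], vecs[u]) for u in unpleasant])
    return pos - neg

pleasant = ["love", "peace", "wonderful", "joy"]
unpleasant = ["hatred", "war", "awful", "agony"]
targets = ["flower", "insect"]

vocab = set(pleasant + unpleasant + targets)
vecs = load_glove("glove.840B.300d.txt", vocab)
for t in targets:
    print(t, round(association(t, pleasant, unpleasant, vecs), 3))
```

On this kind of measure, “flower” would be expected to score positive and “insect” negative, mirroring the pattern the researchers reported; substituting names or gendered words for the targets gives the same style of test described below.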
European-American names were also found to be associated with pleasant terms, while African-American names returned unpleasant terms.
“A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it,” Joanna Bryson, a co-author of the study, told The Guardian.
Bryson warned that addressing the issue could be difficult because AI, unlike humans, cannot consciously counteract the biases it has learned.
“A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.
GloVe draws its data from the Common Crawl corpus, a repository of web text collected over eight years that spans politics, art and popular culture.
“The world is biased, the historical data is biased, hence it is not surprising that we receive biased results,” said Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford.
Wachter warned that eliminating bias risks stripping algorithms of their powers of interpretation, but said that confronting the problem is a “responsibility that we as society should not shy away from.”