“[A] Google image recognition program labeled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches, and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.”
– Stephen Buranyi, The Guardian (2017)
Artificial intelligence (AI) is one of the most revolutionary technologies of the twenty-first century, and much of society increasingly relies on it to carry out daily tasks. AI now plays a significant role in our world, handling everything from personalized web searches to autonomous vehicles. In essence, we are becoming more dependent on AI and are placing ever more trust in this momentous technology.
However, AI remains blind to cultural context and the weight of history. Although it is a highly advanced technology that surpasses humans at numerous everyday tasks, AI exhibits a great deal of prejudice against certain groups, particularly those who have been historically disadvantaged. This phenomenon of bigoted technology ultimately stems from the biases ingrained in humans: rather than AI itself being prejudiced, we are imposing our own biases upon AI, which it naturally learns and reproduces. As a piece of technology, AI serves the purposes intended by its human makers, and it simply outputs what is put into it, including our societal prejudices. A major way AI learns to discriminate is through words and language. According to a recent investigation led by Joanna Bryson of Bath University, many AI systems learn to unjustly associate certain words with others through a mechanism called word embedding. With word embedding, AI learns the relationships between words by examining vast amounts of text found on the Internet and judging which words are most closely related based on how frequently they appear near one another. Bryson and her team noted that AI systems grouped female pronouns and names with titles such as “nurse” and “receptionist,” while grouping their male counterparts with titles such as “CEO” and “physician.” These results outraged Bryson and her team but came as no great surprise: the predominant image of a CEO in the Western world is a middle-aged white male, while much of the population assumes nursing to be a woman's occupation.
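To make the word-embedding mechanism concrete, here is a minimal sketch in Python. The vectors below are toy numbers invented for illustration (real systems learn hundreds of dimensions from billions of words), but the measurement is the standard one: cosine similarity, which scores how closely two words' vectors point in the same direction. When occupation words have absorbed gendered co-occurrence patterns from text, the bias Bryson's team observed shows up directly in these scores.

```python
import math

# Toy embedding vectors (hypothetical values chosen for illustration only;
# a trained system would learn these from co-occurrence counts in real text).
embeddings = {
    "he":    [0.9, 0.1, 0.3],
    "she":   [0.1, 0.9, 0.3],
    "ceo":   [0.8, 0.2, 0.5],
    "nurse": [0.2, 0.8, 0.5],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# The occupation words sit closer to one gendered pronoun than the other.
print(cosine_similarity(embeddings["ceo"], embeddings["he"]))     # high
print(cosine_similarity(embeddings["ceo"], embeddings["she"]))    # lower
print(cosine_similarity(embeddings["nurse"], embeddings["she"]))  # high
print(cosine_similarity(embeddings["nurse"], embeddings["he"]))   # lower
```

Nothing in the arithmetic is prejudiced; the skew lives entirely in the vectors, which is exactly why biased training text yields biased associations.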
Moreover, beyond gender, AI discriminates on the basis of race, especially in the justice system. In 2016, Kristian Lum of the non-profit Human Rights Data Analysis Group (HRDAG) demonstrated this with her investigation of a program called PredPol, which predicts the locations of future crimes based on statistics from previous crimes and arrests. Lum fed the program historical data on drug crimes in Oakland, California, and discovered that it flagged majority-African-American neighborhoods at roughly twice the rate of majority-white ones. The result is inordinate targeting of African American communities, which police officers may justify by saying they were simply following the suggestions of a well-trusted computer program.
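The dynamic Lum's study highlights is a feedback loop, and it can be sketched in a few lines. All the numbers below are hypothetical, and this is not PredPol's actual algorithm; the point is only the mechanism: two neighborhoods with identical underlying crime rates start with skewed arrest records, patrols are allocated in proportion to those records, and more patrolling records more crime, so the skew compounds.

```python
# Feedback-loop sketch: equal true crime rates, skewed historical data.
# (Hypothetical numbers; illustrates the dynamic, not any real deployment.)
true_crime_rate = {"A": 0.05, "B": 0.05}   # identical underlying rates
recorded_arrests = {"A": 200, "B": 100}    # historically skewed records
population = 10_000

for year in range(5):
    total = sum(recorded_arrests.values())
    # Patrols go where past arrests were recorded...
    patrol_share = {n: recorded_arrests[n] / total for n in recorded_arrests}
    for n in recorded_arrests:
        # ...and more patrolling means more of the (equal) crime gets
        # recorded there, so next year's data is skewed even further.
        observed = true_crime_rate[n] * population * patrol_share[n]
        recorded_arrests[n] += int(observed)

print(recorded_arrests)  # neighborhood A's record keeps pulling ahead of B's
```

The absolute gap between the two records widens every year even though the neighborhoods are, by construction, identical, which is why "the data said so" is not a neutral justification.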
Society continues to advance at an exponential rate as technology grows in both complexity and ubiquity. Yet society will remain at a standstill if it does not extend its efforts in social justice to technology itself. To address the growing fear of AI harming our most vulnerable populations, governmental regulation of technology's capacity to discriminate against certain individuals is imperative. Numerous laws already prohibit overt prejudice in the workplace, the justice system, and schools; such laws must be extended to increasingly autonomous forms of AI and to people's uses of this technology. Furthermore, because white males currently dominate AI development, much of AI is structured around their personal biases, so greater diversity in the software engineering field would genuinely advance the social equity that AI desperately requires. Only then can we all benefit from this rapidly advancing technology that continues to have an immense impact on our lives.
Literature Cited:
- Buranyi, Stephen. “Rise of the Racist Robots – How AI Is Learning All Our Worst Impulses.” The Guardian, Guardian News and Media, 8 Aug. 2017.
- Devlin, Hannah. “AI Programs Exhibit Racial and Gender Biases, Research Reveals.” The Guardian, Guardian News and Media, 13 Apr. 2017.