I invited Aylin Caliskan to join us from Computer Science at George Washington University this coming Monday, June 3, from 3-4:30pm at the Franke (Regenstein S118, right across from the book return slot) to discuss her excellent work on algorithmic bias (published in Science). Her research demonstrates that the machines we train to do our work from our stories pick up our social and cultural biases and propagate them throughout the world as sexist, racist, and otherwise prejudiced robots. How should we think about (and change) the future this implies?
Algorithmic Mirrors of Human Biases
Assistant Professor, Computer Science, George Washington University
MONDAY, JUNE 3, 2019
3:00 - 4:30 P.M.
Franke Institute - Regenstein Library - JRL S118
As data-driven machine learning brings forth a plethora of challenges, analysis of machines trained on internet-scale linguistic data reveals that they inherit human-like biases. This talk introduces the Word Embedding Association Test, a method for investigating biases embedded in language models trained on billions of sentences collected from the World Wide Web.
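For readers curious about the mechanics, the Word Embedding Association Test compares how strongly two sets of target words associate (by cosine similarity) with two sets of attribute words, summarized as an effect size. The sketch below follows the definition from the Science paper; the tiny hand-made 2-D vectors and the set labels are illustrative assumptions, not vectors from any real trained model.

```python
# Minimal sketch of the WEAT effect size (Caliskan et al., Science 2017).
# The toy 2-D "embeddings" below are invented for illustration only.
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    """Differential association of word vector w with attribute sets A and B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Cohen's-d-style effect size over target sets X and Y."""
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=0)

# Toy vectors: target set X leans toward attribute set A, Y toward B
# (e.g. A = "pleasant" words, B = "unpleasant" words in the original test).
A = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
B = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]
X = [np.array([1.0, 0.1]), np.array([0.8, 0.0])]
Y = [np.array([0.1, 1.0]), np.array([0.0, 0.8])]

d = weat_effect_size(X, Y, A, B)
print(f"WEAT effect size: {d:.2f}")  # positive: X associates with A, Y with B
```

A positive effect size means the first target set sits closer to the first attribute set; with real embeddings trained on web text, this is the statistic that revealed human-like biases in the published study.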
“Algorithms, Models, and Formalisms”
a Mellon Foundation Project at the Franke Institute for the Humanities
Persons with disabilities who need an accommodation in order to participate
in this event should contact 773.702.8274 in advance.