Moderator: Hal Daume
Abstract: On December 2nd, I was fired from Google, with the company citing an email I wrote regarding its treatment of women and Black people. Leading up to my firing, I had been asked to retract a paper titled On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, and after my firing, public communication from Google Research SVP Jeff Dean claimed that this work "didn't meet our bar for publication." What transpired afterwards was a barrage of harassment, stalking, and slurs hurled at me, my collaborators, and members of the Ethical AI team, with my former co-lead Margaret Mitchell being fired on February 19. In this keynote, I will go through some of the key points in the Stochastic Parrots paper, highlighting what happened to me and my collaborators as examples of those points. My firing follows other high-profile firings of Black women for speaking up against injustice towards our communities, such as Dr. Aysha Khoury from Kaiser Permanente School of Medicine, recruiter April Curley from Google, and most recently lecturer Khadijah Johnson from Cornell Tech. This is happening as machine learning-based models are amplifying negative medical outcomes for Black people, and social media platforms are fueling misinformation, genocide, and polarization around the world, most recently in Ethiopia, where ICLR 2020 would have been held. Meanwhile, leaders at various organizations continue to deny these effects while publishing works on machine learning fairness at conferences such as ICLR. The juxtaposition between what is written in the papers and what the community condones in the real-life outcomes of those who speak up against unfair practices shows that the community needs to move beyond fairness rhetoric toward real, tangible change. I will close by stating what this would look like in my view, drawing from works like the Abuse and Misogynoir Playbook written by Dr. Katlyn Turner, Prof. Danielle Wood, and Prof. Catherine D'Ignazio (co-author of Data Feminism).