Workshop
Mon May 6th 09:00 AM -- 06:00 PM @ Room R3
Debugging Machine Learning Models
Julius Adebayo · Himabindu Lakkaraju · Sarah Tan · Rich Caruana

Machine learning (ML) models are increasingly being employed to make highly consequential decisions pertaining to employment [Dastin, 2018], bail [Kleinberg et al., 2018], parole [Dressel and Farid, 2018], and lending [Hurley and Adebayo, 2016]. While such models can learn from large amounts of data and are often very scalable, their applicability is limited by certain safety challenges. A key challenge is identifying and correcting systematic patterns of mistakes made by ML models before deploying them in the real world.

To address this challenge, machine learning can take cues from the traditional software engineering literature, which places significant emphasis on rigorous tools for debugging and formal methods for program verification. While these methods are by no means complete or foolproof, there is ample evidence that they help in developing reliable and robust software [D'Silva et al., 2008]. ML pipelines currently lack analogous infrastructure [Breck et al., 2017], and it would be interesting to explore how to address this shortcoming. Furthermore, some recent research in machine learning has focused on developing methods and tools for testing and verifying models for violations of fairness, robustness, and security constraints [Cotter et al., 2018; Dvijotham et al., 2018; Kearns et al., 2017; Odena and Goodfellow, 2018; Selsam et al., 2017; Stock and Cisse, 2018; Tian et al., 2017; Wicker et al., 2017]. For example, interpretable models have been proposed to detect misclassifications and dataset biases [Koh and Liang, 2017; Kim et al., 2018; Lakkaraju et al., 2017; Zhang et al., 2018]. The field of adversarial learning has proposed techniques that leverage the generation of adversarial examples (and defenses against them) to highlight vulnerabilities in ML models [Goodfellow et al., 2014; Elsayed et al., 2018]. Several of the aforementioned research topics have their own longstanding workshops. Yet, to the best of our knowledge, there has not been a single workshop that brings together researchers spanning all of these topics and working on the common theme of debugging ML models.
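
To make the adversarial-learning perspective above concrete, here is a minimal sketch (written in PyTorch purely for illustration) of how fast gradient sign perturbations [Goodfellow et al., 2014] can be used to surface inputs on which a classifier's predictions are brittle. The model, inputs, and epsilon value below are hypothetical placeholders rather than part of any specific system discussed in this proposal.

import torch
import torch.nn.functional as F

def fgsm_probe(model, x, y, epsilon=0.03):
    # Perturb inputs in the direction that increases the loss and report
    # which predictions flip; flipped inputs are candidate "bugs" to inspect.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = torch.clamp(x_adv + epsilon * x_adv.grad.sign(), 0.0, 1.0)
    with torch.no_grad():
        flipped = model(x_adv).argmax(dim=1) != y
    return x_adv.detach(), flipped

Inputs flagged by such a probe can then be inspected manually or folded into a regression suite for the model, much as failing test cases are in conventional software debugging.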

The goal of this workshop is to bring together researchers and practitioners interested in research problems and questions pertaining to the debugging of machine learning models. For the first edition of this workshop, we intend to focus on research that approaches the problem of debugging ML models from the following perspectives:

• Interpretable and explainable ML
• Formal methods and program verification
• Visualization and human factors
• Security in ML

By bringing together researchers and practitioners working in the aforementioned research areas, we hope to address several key questions pertaining to model debugging (some of which are highlighted below) and facilitate an insightful discussion about the strengths and weaknesses of existing approaches:

• How can interpretable models and techniques aid us in effectively debugging ML models?
• Are existing program verification frameworks readily applicable to ML models? If not, what are the gaps that exist and how do we bridge them?
• What kind of visualization techniques would be most effective in exposing vulnerabilities of ML models?
• How can we leverage techniques and insights from the emerging area of adversarial machine learning to debug ML models?
• What are some of the effective strategies for involving humans in the loop for debugging ML models?
• Can we take cues from statistical considerations such as multiple testing, uncertainty, and false discovery rate control in order to ensure that debugging methodologies and tools actually detect ‘true’ errors of the model being examined? (A minimal sketch of one such consideration follows this list.)
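
As an illustration of the last question above, the following is a minimal sketch of Benjamini-Hochberg false discovery rate control applied to candidate model "bugs". It assumes that each candidate (for example, a data slice flagged by a debugging tool) comes with a p-value for the hypothesis that the model's error rate on that slice exceeds what would be expected by chance; the function name, inputs, and significance level are illustrative assumptions rather than part of any existing tool.

def benjamini_hochberg(p_values, alpha=0.05):
    # Return the indices of candidates that survive FDR control at level alpha.
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha * rank / m:
            threshold_rank = rank
    return sorted(order[:threshold_rank])

# Example: six flagged slices, of which only the first two survive correction.
print(benjamini_hochberg([0.001, 0.008, 0.04, 0.2, 0.6, 0.9]))  # -> [0, 1]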

Target Audience:

We anticipate this workshop will be of interest and utility to researchers in at least the four research areas around which we have focused the workshop agenda. Since there will be contributed posters and talks from students, we expect a good number of young researchers to attend. Additionally, we expect two components of our agenda -- the opinion piece and the panel -- to generate lively discussion and debate in the research community.

Estimated attendance for the workshop: ~100 participants

Funding:

We are applying for grants to fund the coffee breaks and to provide travel grants for invited speakers and student presenters. These include grants from Microsoft, the Future of Life Institute, the Harvard Data Science Initiative, the MIT CSAIL Initiative, and the Open Philanthropy Project.

References:

[Breck et al., 2017] Eric Breck, Shanqing Cai, Eric Nielsen, Michael Salib, and D. Sculley. The ML test score: A rubric for ML production readiness and technical debt reduction. In 2017 IEEE International Conference on Big Data (Big Data), pages 1123–1132. IEEE, 2017.

[Cotter et al., 2018] Andrew Cotter, Heinrich Jiang, Serena Wang, Taman Narayan, Maya Gupta, Seungil You, and Karthik Sridharan. Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals. arXiv preprint arXiv:1809.04198, 2018.

[Dastin, 2018] Jeffrey Dastin. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, Oct 2018. URL https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

[D’Silva et al., 2008] Vijay D’Silva, Daniel Kroening, and Georg Weissenbacher. A survey of automated techniques for formal software verification. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 27(7):1165–1178, 2008.

[Dressel and Farid, 2018] Julia Dressel and Hany Farid. The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1):eaao5580, 2018.

[Dvijotham et al., 2018] Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, and Pushmeet Kohli. A dual approach to scalable verification of deep networks. arXiv preprint arXiv:1803.06567, 2018.

[Elsayed et al., 2018] Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, and Jascha Sohl-Dickstein. Adversarial examples that fool both human and computer vision. arXiv preprint arXiv:1802.08195, 2018.

[Goodfellow et al., 2014] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

[Hurley and Adebayo, 2016] Mikella Hurley and Julius Adebayo. Credit scoring in the era of big data. Yale JL & Tech., 18:148, 2016.

[Kearns et al., 2017] Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. arXiv preprint arXiv:1711.05144, 2017.

[Kim et al., 2018] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In ICML, 2018.

[Kleinberg et al., 2018] Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1):237–293, 2018.

[Koh and Liang, 2017] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In ICML, 2017.

[Lakkaraju et al., 2017] Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Eric Horvitz. Identifying unknown unknowns in the open world: Representations and policies for guided exploration. In AAAI, 2017.

[Odena and Goodfellow, 2018] Augustus Odena and Ian Goodfellow. TensorFuzz: Debugging neural networks with coverage-guided fuzzing. arXiv preprint arXiv:1807.10875, 2018.

[Selsam et al., 2017] Daniel Selsam, Percy Liang, and David L. Dill. Developing bug-free machine learning systems with formal mathematics. In ICML, 2017.

[Stock and Cisse, 2018] Pierre Stock and Moustapha Cisse. ConvNets and ImageNet beyond accuracy: Understanding mistakes and uncovering biases. In ECCV, 2018.

[Tian et al., 2017] Yuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. DeepTest: Automated testing of deep-neural-network-driven autonomous cars. CoRR, abs/1708.08559, 2017. URL http://arxiv.org/abs/1708.08559.

[Wicker et al., 2017] Matthew Wicker, Xiaowei Huang, and Marta Kwiatkowska. Feature-guided black-box safety testing of deep neural networks. CoRR, abs/1710.07859, 2017. URL http://arxiv.org/abs/1710.07859.

[Zhang et al., 2018] Xuezhou Zhang, Xiaojin Zhu, and Stephen Wright. Training set debugging using trusted items. In AAAI, 2018.