Workshop
Mon May 6th 09:45 AM -- 06:30 PM @ Room R03
Debugging Machine Learning Models
Julius Adebayo · Himabindu Lakkaraju · Sarah Tan · Rich Caruana · D. Sculley · Jacob Steinhardt

See the workshop website (https://debug-ml-iclr2019.github.io/) for accepted posters, demos, and other info.

------------------------------

Machine learning (ML) models are increasingly being employed to make highly consequential decisions pertaining to employment [Dastin, 2018], bail [Kleinberg et al., 2017], parole [Dressel and Farid, 2018], and lending [Hurley et al., 2016]. While such models can learn from large amounts of data and are often highly scalable, their applicability is limited by certain safety challenges. A key challenge is identifying and correcting systematic patterns of mistakes made by ML models before deploying them in the real world.

To address this challenge, machine learning can take cues from the traditional software engineering literature, which places significant emphasis on rigorous tools for debugging and on formal methods for program verification. While these methods are by no means complete or foolproof, there is ample evidence that they help in developing reliable and robust software [D’Silva et al., 2008]. ML pipelines currently lack analogous infrastructure [Breck et al., 2016], and it would be interesting to explore how to address this shortcoming. Furthermore, some recent research in machine learning has focused on developing methods and tools for testing and verifying violations of fairness, robustness, and security constraints by ML models [Cotter et al., 2018; Dvijotham et al., 2018; Kearns et al., 2017; Odena et al., 2018; Selsam et al., 2017; Stock et al., 2018; Tian et al., 2017; Wicker et al., 2017]. For example, interpretable models have been proposed to detect misclassifications and dataset biases [Koh and Liang, 2017; Kim et al., 2018; Lakkaraju et al., 2017; Zhang et al., 2018]. The field of adversarial learning has proposed techniques that leverage the generation of adversarial examples (and defenses against them) to highlight vulnerabilities in ML models [Goodfellow et al., 2014; Elsayed et al., 2018]. Several of the aforementioned research topics have their own longstanding workshops. Yet, to the best of our knowledge, there has not been a single workshop that brings together researchers spanning all of these topics to work on the common theme of debugging ML models.
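As a toy illustration of how adversarial perturbations can expose model vulnerabilities, here is a minimal sketch of the fast gradient sign method (FGSM) of Goodfellow et al., 2014, applied to a hand-rolled logistic-regression "model". All weights, inputs, and the epsilon value are made up for illustration; they do not come from any talk or paper at this workshop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb input x in the direction that increases the
    cross-entropy loss of a logistic-regression model (w, b)
    for the true label y."""
    p = sigmoid(w @ x + b)       # model's predicted probability of class 1
    grad_x = (p - y) * w         # analytic d(loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)

# Illustrative model and input; true label is 0.
w = np.array([2.0, -3.0])
b = 0.5
x = np.array([1.0, 1.0])
y = 0.0

x_adv = fgsm(x, y, w, b, eps=0.25)
p_clean = sigmoid(w @ x + b)     # confidence on the clean input
p_adv = sigmoid(w @ x_adv + b)   # confidence on the perturbed input

# The small perturbation pushes the model toward the wrong class,
# surfacing a brittleness that clean-data evaluation would miss.
assert p_adv > p_clean
```

A debugging workflow in this spirit generates such perturbed inputs at scale and inspects the ones that flip the model's prediction.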

The goal of this workshop is to bring together researchers and practitioners interested in research problems and questions pertaining to the debugging of machine learning models. For the first edition of this workshop, we intend to focus on research that approaches the problem of debugging ML models from the following perspectives:

• Interpretable and explainable ML
• Formal methods and program verification
• Visualization and human factors
• Security and adversarial examples in ML

By bringing together researchers and practitioners working in the aforementioned research areas, we hope to address several key questions pertaining to model debugging (some of which are highlighted below) and facilitate an insightful discussion about the strengths and weaknesses of existing approaches:

• How can interpretable models and techniques aid us in effectively debugging ML models?
• Are existing program verification frameworks readily applicable to ML models? If not, what are the gaps that exist and how do we bridge them?
• What kind of visualization techniques would be most effective in exposing vulnerabilities of ML models?
• What are some of the effective strategies for using human input and expertise for debugging ML models?
• How do we design adversarial attacks that highlight vulnerabilities in the functionality of ML models?
• How do we provide guarantees on the correctness of proposed debugging approaches? Can we take cues from statistical considerations such as multiple testing and uncertainty to ensure that debugging methodologies and tools actually detect ‘true’ errors?
• Given an ML model or system, how do we bound the probability of its failures?
• What can we learn from the failures of widely deployed ML systems? What can we say about debugging for different types of biases, including discrimination?
• What are standardized best practices for debugging large-scale ML systems? What are existing tools, software, and hardware, and how might they be improved?
• What are domain-specific nuances of debugging ML models in healthcare, criminal justice, public policy, education, and other social good applications?

Target Audience:
We anticipate this workshop will be of interest and utility to researchers in the four research areas on which our agenda focuses. Since there will be contributed posters and talks from students, we expect a good number of young researchers to attend. Additionally, we expect two components of our agenda -- the opinion piece and the panel -- to generate a lot of excitement and debate in the research community.

09:50 AM Opening (Remarks)
10:00 AM A New Perspective on Adversarial Perturbations (Invited Talk)
Aleksander Madry
10:30 AM Similarity of Neural Network Representations Revisited (Contributed Talk)
Simon Kornblith
10:40 AM Error terrain analysis for machine learning: Tool and visualizations (Contributed Talk)
Besmira Nushi
10:50 AM Coffee break (Break)
11:10 AM Verifiable Reinforcement Learning via Policy Extraction (Invited Talk)
Osbert Bastani
11:40 AM Debugging Machine Learning via Model Assertions (Contributed Talk)
Daniel Kang
11:50 AM Improving Jobseeker-Employer Match Models at Indeed Through Process, Visualization, and Exploration (Contributed Talk)
Benjamin Link
12:00 PM Break
12:10 PM Discovering Natural Bugs Using Adversarial Data Perturbations (Invited Talk)
Sameer Singh
12:40 PM "Debugging" Discriminatory ML Systems (Invited Talk)
Deborah Raji
01:00 PM NeuralVerification.jl: Algorithms for Verifying Deep Neural Networks (Contributed Talk)
Tomer Arnon, Chris Lazarus
01:10 PM Lunch (Break)
03:20 PM Welcome back (Remarks)
03:30 PM Safe and Reliable Machine Learning: Preventing and Identifying Failures (Invited Talk)
Suchi Saria
04:00 PM Better Code for Less Debugging with AutoGraph (Invited Talk)
Dan Moldovan
04:20 PM Posters & Demos & Coffee break (Poster & Demo Session)
05:20 PM The Scientific Method in the Science of Machine Learning (Contributed Position Paper)
Michela Paganini
05:30 PM Don’t debug your black box, replace it (Invited Opinion Piece)
Cynthia Rudin
06:00 PM Panel: The Future of Debugging (Panel and Q&A)
Hima Lakkaraju, Aleksander Madry, Cynthia Rudin, Dan Moldovan, Deborah Raji, Osbert Bastani, Sameer Singh, Suchi Saria
06:25 PM Closing (Remarks)