Workshop
Mon May 06 07:45 AM -- 04:30 PM (PDT) @ Room R03
Debugging Machine Learning Models
Julius Adebayo · Himabindu Lakkaraju · Sarah Tan · Rich Caruana · Jacob Steinhardt · D. Sculley
See the workshop website (https://debug-ml-iclr2019.github.io/) for accepted posters, demos, and other info.
------------------------------
Machine learning (ML) models are increasingly being employed to make highly consequential decisions pertaining to employment [Dastin, 2018], bail [Kleinberg et al., 2017], parole [Dressel and Farid, 2018], and lending [Hurley et al., 2016]. While such models can learn from large amounts of data and are often very scalable, their applicability is limited by certain safety challenges. A key challenge is identifying and correcting systematic patterns of mistakes made by ML models before they are deployed in the real world.
To address this challenge, machine learning can take cues from the traditional software engineering literature, which places significant emphasis on rigorous tools for debugging and formal methods for program verification. While these methods are by no means complete or foolproof, there is ample evidence that they help in developing reliable and robust software [D’Silva et al., 2008]. ML pipelines currently lack analogous infrastructure [Breck et al., 2016], and it would be interesting to explore how to address this shortcoming. Furthermore, some recent research in machine learning has focused on developing methods and tools for testing and verifying whether models violate fairness, robustness, and security constraints [Cotter et al., 2018; Dvijotham et al., 2018; Kearns et al., 2017; Odena et al., 2018; Selsam et al., 2017; Stock et al., 2018; Tian et al., 2017; Wicker et al., 2017]. For example, interpretable models have been proposed to detect misclassifications and dataset biases [Koh and Liang, 2017; Kim et al., 2018; Lakkaraju et al., 2017; Zhang et al., 2018]. The field of adversarial learning has proposed techniques that leverage the generation of adversarial examples (and defenses against them) to expose vulnerabilities in ML models [Goodfellow et al., 2014; Elsayed et al., 2018]. Several of these research topics have their own longstanding workshops. Yet, to the best of our knowledge, no single workshop has brought together researchers spanning all of these topics around the common theme of debugging ML models.
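To make the adversarial-examples perspective concrete, the sketch below uses FGSM-style perturbations (in the spirit of Goodfellow et al., 2014) as a debugging probe that surfaces inputs whose predictions are brittle. It is a minimal sketch assuming a generic PyTorch classifier; the model, data, and epsilon value are placeholders for illustration, not artifacts of any cited work.

```python
import torch
import torch.nn.functional as F

def fgsm_probe(model, x, y, epsilon=0.03):
    """FGSM-style probe (in the spirit of Goodfellow et al., 2014):
    perturb inputs in the direction that increases the loss, then check
    whether predictions flip under this tiny perturbation.

    Assumes `model` is a torch.nn.Module returning logits, and `x`, `y`
    are a batch of inputs and integer class labels (placeholders here).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step of size epsilon away from the true label.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage sketch: flag examples whose prediction changes under the probe.
# x_probe = fgsm_probe(model, x, y)
# flipped = model(x_probe).argmax(dim=1) != model(x).argmax(dim=1)
```

Inputs flagged this way are candidates for closer inspection, which is how adversarial-example generation can double as a model-debugging tool.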
The goal of this workshop is to bring together researchers and practitioners interested in research problems and questions pertaining to the debugging of machine learning models. For the first edition of this workshop, we intend to focus on research that approaches the problem of debugging ML models from the following perspectives:
• Interpretable and explainable ML
• Formal methods and program verification
• Visualization and human factors
• Security and adversarial examples in ML
By bringing together researchers and practitioners working in the aforementioned research areas, we hope to address several key questions pertaining to model debugging (some of which are highlighted below) and facilitate an insightful discussion about the strengths and weaknesses of existing approaches:
• How can interpretable models and techniques aid us in effectively debugging ML models?
• Are existing program verification frameworks readily applicable to ML models? If not, what gaps exist and how do we bridge them?
• What kind of visualization techniques would be most effective in exposing vulnerabilities of ML models?
• What are some of the effective strategies for using human input and expertise for debugging ML models?
• How do we design adversarial attacks that highlight vulnerabilities in the functionality of ML models?
• How do we provide guarantees on the correctness of proposed debugging approaches? Can we take cues from statistical considerations such as multiple testing and uncertainty to ensure that debugging methodologies and tools actually detect ‘true’ errors? (See the sketch after this list for one illustration.)
• Given an ML model or system, how do we bound the probability of its failures?
• What can we learn from the failures of widely deployed ML systems? What can we say about debugging for different types of biases, including discrimination?
• What are standardized best practices for debugging large-scale ML systems? What are existing tools, software, and hardware, and how might they be improved?
• What are domain-specific nuances of debugging ML models in healthcare, criminal justice, public policy, education, and other social good applications?
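To illustrate the multiple-testing question above, the sketch below applies the standard Benjamini-Hochberg procedure to p-values that a hypothetical error-detection test assigns to flagged model mistakes, so that only a controlled fraction of the confirmed flags are expected to be false discoveries. The p-values and the detection test are assumptions for illustration, not a prescribed methodology.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of flags kept after controlling the false
    discovery rate at level alpha (standard Benjamini-Hochberg procedure).

    `p_values`: one p-value per candidate model error reported by some
    hypothetical error-detection test (assumed input for illustration).
    """
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha; keep the k smallest.
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = ranked <= thresholds
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        keep[order[: k + 1]] = True
    return keep

# Usage sketch: retain only flagged errors that survive FDR control.
# confirmed = benjamini_hochberg(p_values_from_error_detector, alpha=0.1)
```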
Target Audience:
We anticipate that this workshop will be of interest and utility to researchers in at least the four research areas on which we have focused the workshop agenda. Since there will be contributed posters and talks from students, we expect a good number of young researchers to attend. Additionally, we expect two components of our agenda -- the opinion piece and the panel -- to generate considerable excitement and debate in the research community.