Workshop

Science and Engineering of Deep Learning

Levent Sagun, Caglar Gulcehre, Adriana Romero, Negar Rostamzadeh, Stefano Sarao Mannelli, Lenka Zdeborova, Samy Bengio

Abstract:

We aim to create a venue where we discuss seemingly contrasting challenges in machine learning research and their consequences. We invite researchers to discuss the boundaries between science and engineering, the implications of having blurred boundaries, and their potential consequences in areas of life beyond research.

We organized the first "Science meets Engineering in Deep Learning" workshop at NeurIPS 2019, which aimed to identify the potential boundaries between science and engineering and the role of theoretically driven and application-driven research in deep learning. The workshop's discussions highlighted how intertwined science and engineering are and emphasized the benefits of their symbiotic relationship in pushing the boundaries of both theoretically driven and application-driven research. To highlight the communication channel we aimed to build, we chose "Science meets Engineering" for the title of the first iteration of the workshop.

Since then, such boundaries appear harder and harder to draw, and it becomes increasingly clear that we need to agree on a set of values that define us as a community, and that will shape our future research. In particular, we envision that such values will help (1) emphasize important engineering and scientific practices that we should foster to increase the robustness of our research, (2) acknowledge the broader impact of our research, and (3) abide by ethical standards.

Reflecting this shift in perspective, this year's title is "Science and Engineering of Deep Learning". With this in mind, we are proposing the second iteration of the workshop for ICLR 2021, focusing on the core themes mentioned above. In particular, we would like to ask (1) "What are the scientific and engineering practices that we should promote as a community, and how do they interact?" and (2) "What is the broader impact of such adopted scientific and engineering practices?"

https://sites.google.com/view/sedl-workshop



Schedule

Fri 2:40 a.m. - 2:45 a.m.
Opening remarks
Negar Rostamzadeh
Fri 2:45 a.m. - 3:05 a.m.
Ideas for machine learning from psychology's reproducibility crisis (Talk)

Samuel J Bell (University of Cambridge); Onno P Kampman (University of Cambridge)

In the early 2010s, a crisis of reproducibility rocked the field of psychology. Following a period of reflection, psychology has responded with radical reform of its scientific practices. More recently, similar questions about the reproducibility of machine learning research have also come to the fore. In this short paper, we bring a novel perspective to this discussion. We present select ideas from the discipline of psychology, translating them for a machine learning audience. Whether we seek to build machine learning systems or to understand them, we can all learn from psychology's experience.

Samuel J Bell
Fri 3:05 a.m. - 3:25 a.m.
Model Selection's Disparate Impact in Real-World Deep Learning Applications (Talk)

Jessica Forde (Brown University); A. Feder Cooper (Cornell University); Michael L. Littman (Brown University)

Algorithmic fairness has emphasized the role of biased data in unfair automated decision outcomes. Recently, attention has shifted to sources of bias that implicate fairness at other stages of the ML pipeline. We contend that one such source of bias, human preferences in model selection, remains under-explored in terms of its role in disparate impact across demographic groups. Using a deep learning model trained on real-world medical imaging data, we verify our claim empirically and argue that commonly-used benchmark datasets can conceal this issue.

Jessica Forde, A. Feder Cooper
Fri 3:25 a.m. - 3:45 a.m.
Do Input Gradients Highlight Discriminative Features? (Talk)

Harshay Shah (Microsoft Research); Prateek Jain (Google); Praneeth Netrapalli (Microsoft Research)

Interpretability methods that seek to explain instance-specific model predictions [Simonyan et al. 2014, Smilkov et al. 2017] are often based on the premise that the magnitude of input-gradient---gradient of the loss with respect to input---highlights discriminative features that are relevant for prediction over non-discriminative features that are irrelevant for prediction. In this work, we introduce an evaluation framework to study this hypothesis for benchmark image classification tasks, and make two surprising observations on CIFAR-10 and Imagenet-10 datasets: (a) contrary to conventional wisdom, input gradients of standard models (i.e., trained on the original data) actually highlight irrelevant features over relevant features; (b) however, input gradients of adversarially robust models (i.e., trained on adversarially perturbed data) starkly highlight relevant features over irrelevant features. To better understand input gradients, we introduce a synthetic testbed and theoretically justify our counter-intuitive empirical findings. Our observations motivate the need to formalize and verify common assumptions in interpretability, while our evaluation framework and synthetic dataset serve as a testbed to rigorously analyze instance-specific interpretability methods.

Harshay Shah
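The abstract above hinges on what an "input gradient" is: the gradient of the loss with respect to the input, whose magnitude is often read as a saliency map. As a minimal illustration only (a hypothetical logistic-regression toy, not the deep networks or datasets studied in the paper), the following sketch computes an input gradient analytically and shows how zero-weight features receive zero saliency:

```python
import numpy as np

# Hypothetical stand-in for a trained model: logistic regression on 4 "pixels".
# Features 0-1 are discriminative (nonzero weight); features 2-3 are irrelevant.
w = np.array([2.0, -1.5, 0.0, 0.0])
b = 0.1

def loss_and_input_gradient(x, y):
    """Cross-entropy loss and its gradient with respect to the *input* x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))            # predicted probability
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))  # cross-entropy
    grad_x = (p - y) * w                               # chain rule through the logit
    return loss, grad_x

x = np.array([1.0, 0.5, 0.3, -0.2])   # hypothetical input
loss, g = loss_and_input_gradient(x, y=1)
saliency = np.abs(g)                   # input-gradient magnitude, as in saliency maps
```

In this linear toy the saliency of each feature is proportional to its weight magnitude, so irrelevant features get exactly zero saliency; the paper's point is that for standard deep models this intuition can fail, which is precisely what their evaluation framework tests.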
Fri 4:00 a.m. - 4:25 a.m.
S1: Adyasha Maharana (Talk)
Adyasha Maharana
Fri 4:25 a.m. - 4:50 a.m.
S1: Pushmeet Kohli (Talk)
Pushmeet Kohli
Fri 4:50 a.m. - 5:15 a.m.
S1: Joelle Pineau (Talk)   
Joelle Pineau
Fri 5:30 a.m. - 6:30 a.m.

S1 speakers: Elaine Nsoesie, Pushmeet Kohli, Joelle Pineau. Moderator: Michela Paganini

Adyasha Maharana, Pushmeet Kohli, Joelle Pineau, Nafissa Yakubova, Michela Paganini
Fri 6:30 a.m. - 7:00 a.m.

Poster session on gather.town at: https://eventhosts.gather.town/app/Ol5FZqpU11ewV96i/sedl2021_iclr

During the poster session we will have the following contributions:

  1. Dipam Paul (Emory University), Alankrita Tewari* (KIIT University), Jiwoong Jeong (Emory University), and Imon Banerjee (Emory University) Boosting Classification Accuracy of Fertile Sperm Cell Images leveraging cDCGAN [poster] [paper]

  2. Harshay Shah* (Microsoft Research), Prateek Jain (Google), and Praneeth Netrapalli (Microsoft Research) Do Input Gradients Highlight Discriminative Features? [paper] Poster session 1

  3. Yu-Lin Tsai* (National Chiao Tung University), Chia-Yi Hsu (National Yang Ming Chiao Tung University), Chia-Mu Yu (National Chiao Tung University), and Pin-Yu Chen (IBM Research) Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations [poster] [paper] Both poster sessions

  4. Arantxa Casanova* (FAIR / Mila), Michal Drozdzal (FAIR), and Adriana Romero-Soriano (FAIR) Generating unseen complex scenes: are we there yet? [video] [poster] [paper] Poster session 1

  5. Hubert HE Etienne* (Facebook AI) Solving moral dilemmas with AI to address the social implications of the Covid-19 crisis [video] [paper] Poster session 1

  6. Tiffany Cai* (Columbia University), Jonathan Frankle (MIT), David Schwab (Facebook AI Research), and Ari S Morcos (FAIR) Are all negatives created equal in contrastive instance discrimination? [video] [poster] [paper] Poster session 2

  7. Arlene E Siswanto* (MIT), Jonathan Frankle (MIT), and Michael Carbin (MIT) Examining the Role of Normalization in the Lottery Ticket Hypothesis [video] [poster] [paper] Poster session 2

  8. Namhoon Lee* (UNIST), Philip Torr (University of Oxford), and Richard Hartley (Australian National University) Optimal mini-batch size for stochastic gradient methods [poster] [paper] Poster session 1

  9. Camille Ballas* (Dublin City University), César Laurent (Mila, Université de Montréal), Thomas George (MILA, Université de Montréal), Nicolas Ballas (Facebook FAIR), Suzanne Little (Dublin City University, Ireland), and Pascal Vincent (Facebook FAIR & MILA Université de Montréal) Investigating Loss-modelling Pruning Criteria for Unstructured Pruning [video] [poster] [paper] Poster session 2

  10. Samuel J Bell* (University of Cambridge) and Onno P Kampman (University of Cambridge) Ideas for machine learning from psychology's reproducibility crisis [paper]

  11. Arlene E Siswanto* (MIT), Jonathan Frankle (MIT), and Michael Carbin (MIT) Reconciling Sparse and Structured Pruning: A Scientific Study of Block Sparsity [video] [poster] [paper] Poster session 2

  12. Jiaxin Zhang* (Oak Ridge National Laboratory) and Victor Fung (Oak Ridge National Laboratory) Efficient Inverse Learning for Materials Design and Discovery [paper] Poster session 2

  13. Rajiv Movva* (MIT), Jonathan Frankle (MIT), and Michael Carbin (MIT) Studying the Consistency and Composability of Lottery Ticket Pruning Masks [video] [poster] [paper] Poster session 2

  14. Jessica Forde* (Brown University), A. Feder Cooper* (Cornell University), and Michael L. Littman (Brown University) Model Selection's Disparate Impact in Real-World Deep Learning Applications [poster] [paper] Both poster sessions

  15. Saurabh Garg* (Carnegie Mellon University), Joshua Zhanson (Carnegie Mellon University), Emilio Parisotto (Carnegie Mellon University), Adarsh Prasad (Carnegie Mellon University), Zico Kolter (Carnegie Mellon University), Sivaraman Balakrishnan (Carnegie Mellon University), Zachary Lipton (Carnegie Mellon University), Ruslan Salakhutdinov (Carnegie Mellon University), and Pradeep Ravikumar (Carnegie Mellon University) On Proximal Policy Optimization's Heavy-tailed Gradients [video] [poster] [paper] Poster session 2

An asterisk (*) marks the author presenting the work at the poster session. Each entry also indicates the poster session(s) in which it appears.

Fri 7:00 a.m. - 7:25 a.m.
S2: Deb Raji (Talk)   
Deborah Raji
Fri 7:25 a.m. - 7:50 a.m.
S2: Adina Williams (Talk)   
Adina Williams
Fri 7:50 a.m. - 8:15 a.m.
S2: Alex Hanna (Talk)
Alex Hanna
Fri 8:30 a.m. - 9:30 a.m.

S2 speakers: Deb Raji, Adina Williams, Alex Hanna. Moderator: Vicente Ordonez-Roman

Deborah Raji, Adina Williams, Alex Hanna, Vicente Ordonez, Emily Denton
Fri 9:30 a.m. - 10:00 a.m.

Poster session on gather.town at: https://eventhosts.gather.town/app/Ol5FZqpU11ewV96i/sedl2021_iclr

During this poster session, the same contributions listed for the first poster session above will be presented; entries marked "Poster session 2" or "Both poster sessions" appear in this session.

Fri 10:00 a.m. - 11:30 a.m.

Panelists: Danielle Belgrave, Meredith Broussard, Silvia Chiappa, Jonathan Frankle, Sandra Wachter. Moderator: Shakir Mohamed

Danielle Belgrave, Meredith Broussard, Silvia Chiappa, Jonathan Frankle, Sandra Wachter, Shakir Mohamed, Emily Dinan