We aim to create a venue for discussing seemingly contrasting challenges in machine learning research and their consequences. We invite researchers to discuss the boundaries between science and engineering, the implications of blurring those boundaries, and their potential consequences in areas of life beyond research.
We organized the first "Science meets Engineering in Deep Learning" workshop at NeurIPS 2019, which aimed to identify the potential boundaries between science and engineering and the role of theoretically driven and application-driven research in deep learning. The workshop's discussions highlighted how intertwined science and engineering are and emphasized the benefits of their symbiotic relationship in pushing the boundaries of both theoretically driven and application-driven research. To highlight the communication channel we aimed to build, we chose "Science meets Engineering" as the title of the first iteration of the workshop.
Since then, such boundaries appear harder and harder to draw, and it becomes increasingly clear that we need to agree on a set of values that define us as a community, and that will shape our future research. In particular, we envision that such values will help (1) emphasize important engineering and scientific practices that we should foster to increase the robustness of our research, (2) acknowledge the broader impact of our research, and (3) abide by ethical standards.
Reflecting this shift in perspective, this year's proposed title is "Science and Engineering of Deep Learning". With this in mind, we are proposing the second iteration of the workshop for ICLR 2021, focusing on the core themes mentioned above. In particular, we would like to ask (1) "What scientific and engineering practices should we promote as a community, and how do they interact?" and (2) "What is the broader impact of the scientific and engineering practices we adopt?"
https://sites.google.com/view/sedl-workshop
Fri 2:40 a.m. - 2:45 a.m. | Opening remarks | Negar Rostamzadeh
Fri 2:45 a.m. - 3:05 a.m. | Ideas for machine learning from psychology's reproducibility crisis (Contributed talk) | Samuel J Bell
Samuel J Bell (University of Cambridge); Onno P Kampman (University of Cambridge)
In the early 2010s, a crisis of reproducibility rocked the field of psychology. Following a period of reflection, psychology has responded with radical reform of its scientific practices. More recently, similar questions about the reproducibility of machine learning research have also come to the fore. In this short paper, we bring a novel perspective to this discussion. We present select ideas from the discipline of psychology, translating them into relevance for a machine learning audience. Whether we seek to build machine learning systems or to understand them, we can all learn from psychology's experience.
Fri 3:05 a.m. - 3:25 a.m. | Model Selection's Disparate Impact in Real-World Deep Learning Applications (Contributed talk) | Jessica Forde · A. Feder Cooper
Jessica Forde (Brown University); A. Feder Cooper (Cornell University); Michael L. Littman (Brown University)
Algorithmic fairness has emphasized the role of biased data in unfair automated decision outcomes. Recently, there has been a shift in attention to sources of bias that implicate fairness in other stages of the ML pipeline. We contend that one source of such bias, human preferences in model selection, remains under-explored in terms of its role in disparate impact across demographic groups. Using deep learning on real-world medical imaging data, we verify our claim empirically and argue that commonly used benchmark datasets can conceal this issue.
Fri 3:25 a.m. - 3:45 a.m. | Do Input Gradients Highlight Discriminative Features? (Contributed talk) | Harshay Shah
Harshay Shah (Microsoft Research); Prateek Jain (Google); Praneeth Netrapalli (Microsoft Research)
Interpretability methods that seek to explain instance-specific model predictions [Simonyan et al. 2014, Smilkov et al. 2017] are often based on the premise that the magnitude of the input gradient---the gradient of the loss with respect to the input---highlights discriminative features that are relevant for prediction over non-discriminative features that are irrelevant for prediction. In this work, we introduce an evaluation framework to study this hypothesis for benchmark image classification tasks, and make two surprising observations on the CIFAR-10 and ImageNet-10 datasets: (a) contrary to conventional wisdom, input gradients of standard models (i.e., trained on the original data) actually highlight irrelevant features over relevant features; (b) however, input gradients of adversarially robust models (i.e., trained on adversarially perturbed data) starkly highlight relevant features over irrelevant features. To better understand input gradients, we introduce a synthetic testbed and theoretically justify our counter-intuitive empirical findings. Our observations motivate the need to formalize and verify common assumptions in interpretability, while our evaluation framework and synthetic dataset serve as a testbed to rigorously analyze instance-specific interpretability methods.
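The abstract above turns on the notion of an input gradient. For reference only, here is a minimal sketch of computing an input-gradient saliency map in PyTorch; the untrained ResNet-18, the random input, and the label are placeholders, and this is not the authors' evaluation framework:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Placeholder classifier and input (assumed setup, not the paper's models/data).
model = resnet18(num_classes=10).eval()             # untrained stand-in model
x = torch.randn(1, 3, 32, 32, requires_grad=True)   # one CIFAR-10-sized input
y = torch.tensor([3])                               # placeholder label

# Gradient of the loss with respect to the input.
loss = F.cross_entropy(model(x), y)
loss.backward()

# Per-pixel saliency: magnitude of the input gradient, maxed over color channels.
saliency = x.grad.abs().max(dim=1).values           # shape (1, 32, 32)
```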
Fri 4:00 a.m. - 4:25 a.m. | S1: Adyasha Maharana (Talk) | Adyasha Maharana
Fri 4:25 a.m. - 4:50 a.m. | S1: Pushmeet Kohli (Talk) | Pushmeet Kohli
Fri 4:50 a.m. - 5:15 a.m. | S1: Joelle Pineau (Talk) | Joelle Pineau
Fri 5:30 a.m. - 6:30 a.m. | Mini-Panel: Working towards DL as a methodological tool (Discussion panel) | Adyasha Maharana · Pushmeet Kohli · Joelle Pineau · Nafissa Yakubova · Michela Paganini
S1 speakers: Elaine Nsoesie, Pushmeet Kohli, Joelle Pineau. Moderator: Michela Paganini
Fri 6:30 a.m. - 7:00 a.m. | Break & Poster session 1 (Poster session)
Poster session on gather.town at: [ protected link dropped ]. During the poster session we will have the following contributions:
The "*" indicates people presenting the work at the poster session. In the list you can also find at which poster session they will participate.
Fri 7:00 a.m. - 7:25 a.m. | S2: Deb Raji (Talk) | Inioluwa Raji
Fri 7:25 a.m. - 7:50 a.m. | S2: Adina Williams (Talk) | Adina Williams
Fri 7:50 a.m. - 8:15 a.m. | S2: Alex Hanna (Talk) | Alex Hanna
Fri 8:30 a.m. - 9:30 a.m. | Mini-Panel: Social impact of ML research (Discussion panel) | Inioluwa Raji · Adina Williams · Alex Hanna · Vicente Ordonez · Emily Denton
S2 speakers: Deb Raji, Adina Williams, Alex Hanna. Moderator: Vicente Ordonez-Roman
Fri 9:30 a.m. - 10:00 a.m. | Break & Poster session 2 (Poster session)
Poster session on gather.town at: [ protected link dropped ]. During the poster session we will have the following contributions:
The "*" indicates people presenting the work at the poster session. In the list you can also find at which poster session they will participate.
Fri 10:00 a.m. - 11:30 a.m. | Panel: Values in science and engineering of ML research (Discussion panel) | Danielle Belgrave · Meredith Broussard · Silvia Chiappa · Jonathan Frankle · Sandra Wachter · Shakir Mohamed · Emily Dinan
Panelists: Danielle Belgrave, Meredith Broussard, Silvia Chiappa, Jonathan Frankle, Sandra Wachter. Moderator: Shakir Mohamed