Workshop
PAIR^2Struct: Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data
Hao Wang · Wanyu Lin · Hao He · Di Wang · Chengzhi Mao · Muhan Zhang
Fri 29 Apr, 9 a.m. PDT
In recent years, principles and guidance for the accountable and ethical use of artificial intelligence (AI) have sprung up around the globe. In particular, data Privacy, Accountability, Interpretability, Robustness, and Reasoning have been broadly recognized as fundamental principles for applying machine learning (ML) technologies to decision-critical and/or privacy-sensitive applications. At the same time, in many real-world applications the data is naturally represented in structured formalisms, such as graph-structured data (e.g., networks), grid-structured data (e.g., images), and sequential data (e.g., text). By exploiting this inherent structure, one can design approaches that identify and use the most relevant variables to make reliable decisions, thereby facilitating real-world deployment.

In this workshop, we will examine research progress towards the accountable and ethical use of AI from diverse research communities, such as the ML community and the security & privacy community. Specifically, we will focus on the limitations of existing notions of Privacy, Accountability, Interpretability, Robustness, and Reasoning. We aim to bring together researchers from various areas (e.g., ML, security & privacy, computer vision, and healthcare) to discuss the challenges, definitions, formalisms, and evaluation protocols surrounding the accountable and ethical use of ML technologies in high-stakes applications with structured data. In particular, we will discuss the interplay among these fundamental principles, from theory to applications, and identify new areas that call for additional research effort. We will also seek possible solutions and associated interpretations from the notion of causation, an inherent property of systems. We hope the workshop proves fruitful for building accountable and ethical AI systems in practice.
Schedule
Fri 9:00 a.m. - 9:05 a.m. | Introduction and Opening Remarks | Hao Wang · Wanyu Lin
Fri 9:05 a.m. - 9:30 a.m. | On the Foundations of Causal Artificial Intelligence (Invited Talk) | Elias Bareinboim
Fri 9:30 a.m. - 9:35 a.m. | Q&A with Elias Bareinboim (Q&A) | Elias Bareinboim
Fri 9:35 a.m. - 10:05 a.m. | Privacy Meter Project: Towards Auditing Data Privacy and Q&A (Invited Talk) | Reza Shokri
Fri 10:05 a.m. - 10:15 a.m. | Rethinking Stability for Attribution-based Explanations (Oral) | Chirag Agarwal · Nari Johnson · Martin Pawelczyk · Satyapriya Krishna · Eshika Saxena · Marinka Zitnik · Hima Lakkaraju
Fri 10:15 a.m. - 10:40 a.m. | Trustworthy Machine Learning via Logic Reasoning (Invited Talk) | Bo Li
Fri 10:40 a.m. - 10:45 a.m. | Q&A with Bo Li (Q&A) | Bo Li
Fri 10:45 a.m. - 11:10 a.m. | Quantifying Privacy Risks of Machine Learning Models (Invited Talk) | Yang Zhang
Fri 11:10 a.m. - 11:15 a.m. | Q&A with Yang Zhang (Q&A) | Yang Zhang
Fri 11:15 a.m. - 11:25 a.m. | Invariant Causal Representation Learning for Generalization in Imitation and Reinforcement Learning (Oral) | Chaochao Lu · José Miguel Hernández Lobato · Bernhard Schoelkopf
Fri 11:25 a.m. - 1:30 p.m. | Poster Session 1 (Poster Session)
Fri 1:30 p.m. - 1:55 p.m. | Interpretable AI for Medical Imaging (Invited Talk) | Lei Xing
Fri 1:55 p.m. - 2:00 p.m. | Q&A with Lei Xing (Q&A) | Lei Xing
Fri 2:00 p.m. - 2:25 p.m. | Learning Structured Dynamics Models for Physical Reasoning and Robot Manipulation (Invited Talk) | Jiajun Wu
Fri 2:25 p.m. - 2:30 p.m. | Q&A with Jiajun Wu (Q&A) | Jiajun Wu
Fri 2:30 p.m. - 2:40 p.m. | Maximizing Entropy on Adversarial Examples Can Improve Generalization (Oral) | Amrith Setlur · Benjamin Eysenbach
Fri 2:40 p.m. - 3:05 p.m. | Adapting Deep Predictors Under Causally Structured Shifts (Invited Talk) | Zachary Lipton
Fri 3:05 p.m. - 3:10 p.m. | Q&A with Zachary Lipton (Q&A) | Zachary Lipton
Fri 3:10 p.m. - 3:35 p.m. | Explainable AI in Practice: Challenges and Opportunities (Invited Talk) | Himabindu Lakkaraju
Fri 3:35 p.m. - 3:40 p.m. | Q&A with Himabindu Lakkaraju (Q&A) | Himabindu Lakkaraju
Fri 3:40 p.m. - 3:50 p.m. | Node-Level Differentially Private Graph Neural Networks (Oral) | Ameya Daigavane · Gagan Madan · Aditya Sinha · Abhradeep Guha Thakurta · Gaurav Aggarwal · Prateek Jain
Fri 3:50 p.m. - 4:40 p.m. | Panel (Panel)
Fri 4:40 p.m. - 6:00 p.m. | Poster Session 2 (Poster Session)
Posters
- Reversing Adversarial Attacks with Multiple Self-Supervised Tasks | Matthew Lawhon · Chengzhi Mao · Gustave Ducrest · Junfeng Yang
- Global Counterfactual Explanations: Investigations, Implementations and Improvements | Dan Ley · Saumitra Mishra · Daniele Magazzeni
- Saliency Maps Contain Network "Fingerprints" | Amy Widdicombe · Been Kim · Simon Julier
- Geometrically Guided Saliency Maps | Md Mahfuzur Rahman · Noah Lewis · Sergey Plis
- ConceptDistil: Model-Agnostic Distillation of Concept Explanations | João Pedro Sousa · Ricardo Moreira · Vladimir Balayan · Pedro Saleiro · Pedro Bizarro
- Data Poisoning Attacks on Off-Policy Policy Evaluation Algorithms | Elita Lobo · Harvineet Singh · Marek Petrik · Cynthia Rudin · Hima Lakkaraju
- Efficient Privacy-Preserving Inference for Convolutional Neural Networks | Han Xuanyuan · Francisco Vargas · Stephen Cummins
- Post-hoc Concept Bottleneck Models | Mert Yuksekgonul · Maggie Wang · James Y Zou
- CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks | Tuomas Oikarinen · Tsui-Wei Weng
- Robust Randomized Smoothing via Two Cost-Effective Approaches | Linbo Liu · Trong Hoang · Lam Nguyen · Tsui-Wei Weng
- Graphical Clusterability and Local Specialization in Deep Neural Networks | Stephen Casper · Shlomi Hod · Daniel Filan · Cody Wild · Andrew Critch · Stuart Russell
- Sparse Logits Suffice to Fail Knowledge Distillation | Haoyu Ma · Yifan Huang · Hao Tang · Chenyu You · Deying Kong · Xiaohui Xie
- User-Level Membership Inference Attack against Metric Embedding Learning | Guoyao Li · Shahbaz Rezaei · Xin Liu
- Towards Differentially Private Query Release for Hierarchical Data | Terrance Liu · Steven Wu
- Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity | Shiyun Xu · Zhiqi Bu · Pratik A Chaudhari · Ian Barnett
- Neural Logic Analogy Learning | Yujia Fan · Yongfeng Zhang
- Rethinking Stability for Attribution-based Explanations | Chirag Agarwal · Nari Johnson · Martin Pawelczyk · Satyapriya Krishna · Eshika Saxena · Marinka Zitnik · Hima Lakkaraju
- Maximizing Entropy on Adversarial Examples Can Improve Generalization | Amrith Setlur · Benjamin Eysenbach
- Node-Level Differentially Private Graph Neural Networks | Ameya Daigavane · Gagan Madan · Aditya Sinha · Abhradeep Guha Thakurta · Gaurav Aggarwal · Prateek Jain
- Invariant Causal Representation Learning for Generalization in Imitation and Reinforcement Learning | Chaochao Lu · José Miguel Hernández Lobato · Bernhard Schoelkopf