Data coupled with the right algorithms offers the potential to save lives, protect the environment, and increase profitability across a range of applications and domains. This potential, however, can be severely inhibited by adverse data properties, which result in poor model performance, failed projects, and potentially serious social implications. This workshop will examine representation learning in the context of limited and sparse training samples, class imbalance, long-tailed distributions, rare cases and classes, and outliers. Speakers and participants will discuss the challenges and risks associated with designing, developing, and learning deep representations from data with adverse properties. In addition, the workshop aims to connect researchers working on these topics in the traditional shallow representation learning community and the more recent deep learning community, in order to advance novel and holistic solutions. Critically, given the growth of AI in real-world decision making, the workshop will also facilitate a discussion of the potential social issues associated with applying deep representation learning to adverse data. The workshop will bring together theoretical and applied deep learning researchers from academia and industry, and lay the groundwork for fruitful research collaborations that span communities that are often siloed.
Welcome from the Organisers
Hugo Larochelle, Google Brain Montréal, Adjunct Professor at Université de Montréal and a Canada CIFAR Chair (Invited Talk)
Hugo Larochelle (Invited Talk Q & A)
Voice2Series: Reprogramming Acoustic Models for Time Series Classification (Spotlights Session 1)
Density Approximation in Deep Generative Models with Kernel Transfer Operators (Spotlights Session 1)
Adversarial Data Augmentation Improves Unsupervised Machine Learning (Spotlights Session 1)
On Adversarial Robustness: A Neural Architecture Search perspective (Spotlights Session 1)
Submodular Mutual Information for Targeted Data Subset Selection (Spotlights Session 1)
Data-Efficient Training of Autoencoders for Mildly Non-Linear Problems (Spotlights Session 1)
Min-Entropy Sampling Might Lead to Better Generalization in Deep Text Classification, Nimrah Shakeel (Spotlights Session 1)
Coffee Break + Gather.town Virtual Poster Session 1
Nitesh Chawla, Frank M. Freimann Professor of Computer Science & Engineering and Director of the Lucy Family Institute for Data and Society at the University of Notre Dame (Invited Talk)
Nitesh Chawla (Invited Talk Q & A)
Breakout discussion session
Lunch Break and Gather.town Discussion Sessions
Bharath Hariharan, Assistant Professor of Computer Science at Cornell University (Invited Talk)
Bharath Hariharan (Invited Talk Q & A)
Leveraging Unlabelled Data through Semi-supervised Learning to Improve the Performance of a Marine Mammal Classification System (Spotlights Session 2)
Continuous Weight Balancing (Spotlights Session 2)
Deep Kernels with Probabilistic Embeddings for Small-Data Learning (Spotlights Session 2)
Boosting Classification Accuracy of Fertile Sperm Cell Images leveraging cDCGAN (Spotlights Session 2)
Towards Robustness to Label Noise in Text Classification via Noise Modeling (Spotlights Session 2)
DeepSMOTE: Deep Learning for Imbalanced Data (Spotlights Session 2)
Coffee Break + Gather.town Virtual Poster Session 2
Alex Hanna, Senior Research Scientist on the Ethical AI team at Google (Invited Talk)
Alex Hanna (Invited Talk Q & A)
Round Table Panel Discussion
Concluding Remarks by the Organisers