Over the past two decades, high-throughput data collection technologies have become commonplace in most fields of science and technology, and with them an ever-increasing amount of big, high-dimensional data is being generated by virtually every real-world system. While such data systems are highly diverse in nature, the underlying data analysis and exploration tasks give rise to common challenges at the core of modern representation learning. For example, even though modern real-world data typically live in high-dimensional ambient measurement spaces, they often exhibit low-dimensional intrinsic structures that can be uncovered by geometry-oriented methods, such as those encountered in manifold learning, graph signal processing, geometric deep learning, and topological data analysis. As a result, recent years have seen significant interest and progress in geometric and topological approaches to representation learning, which enable tractable exploratory analysis by domain experts who are often not computation-oriented. Our overarching goal in the proposed workshop is to deepen our understanding of the challenges and opportunities in this field, while breaking the barriers between the typically disjoint computational approaches (or communities) working in it, with emphasis on the domains of topological data analysis, graph representation learning, and manifold learning, on which we briefly comment below.
Website: https://gt-rl.github.io/
Fri 5:00 a.m. - 5:15 a.m. | Welcome (Live, on Gather.Town)
Please note that you have to be logged in to your ICLR account in order to join Gather.Town.
Fri 5:15 a.m. - 5:45 a.m. | Geometric Deep Learning (Foundation Talk) | Fernando Gama
Fri 5:15 a.m. - 5:45 a.m. | Topological Data Analysis (Foundation Talk) | Théo Lacombe
Fri 5:45 a.m. - 6:00 a.m. | Opening Remarks (Live)
Fri 6:00 a.m. - 7:00 a.m. | Panel: High Impact in Practice (Live Panel Discussion)
Fri 7:00 a.m. - 7:30 a.m. | Interpretable Recommender System With Heterogeneous Information: A Geometric Deep Learning Perspective (Invited Talk) | Yan Leng
Recommender systems (RS) are ubiquitous in digital space. This paper develops a deep learning-based approach to address three practical challenges in RS: complex structures of high-dimensional data, noise in relational information, and the black-box nature of machine learning algorithms. Our method, the Multi-Graph Graph Attention Network (MG-GAT), learns latent user and business representations by aggregating a diverse set of information from the neighbors of each user (business) on a neighbor importance graph. MG-GAT outperforms state-of-the-art deep learning models on the recommendation task using two large-scale datasets collected from Yelp and four other standard RS datasets. The improved performance highlights MG-GAT's advantage in incorporating multi-modal features in a principled manner. The feature importances, neighbor importance graph, and latent representations reveal business insights on predictive features and explainable characteristics of businesses and users. Moreover, the learned neighbor importance graph can be used in a variety of management applications, such as targeting customers, promoting new businesses, and designing information acquisition strategies. Our paper presents a quintessential big data application of deep learning models in management while providing the interpretability essential for real-world decision-making.
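To make the neighbor-aggregation idea in the abstract above concrete, here is a minimal, hypothetical sketch of a single graph-attention aggregation step in NumPy. It is not the MG-GAT implementation; the function name, feature shapes, and the toy adjacency matrix are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of one graph-attention aggregation step,
# in the spirit of the neighbor-importance aggregation described above.
# NOT the MG-GAT reference implementation; all names and shapes are assumptions.
import numpy as np

def attention_aggregate(h, adj, W, a, leaky_slope=0.2):
    """Each node aggregates its neighbors' projected features, weighted by
    learned attention scores (a learned notion of 'neighbor importance')."""
    z = h @ W                                   # (n, d') projected node features
    n = z.shape[0]
    logits = np.full((n, n), -np.inf)           # attention logits, -inf for non-edges
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                e = a @ np.concatenate([z[i], z[j]])        # a^T [z_i || z_j]
                logits[i, j] = np.where(e > 0, e, leaky_slope * e)  # LeakyReLU
    # softmax over each node's neighborhood gives importance weights
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha = np.where(np.isfinite(logits), alpha, 0.0)
    alpha /= alpha.sum(axis=1, keepdims=True) + 1e-12
    return alpha @ z                            # aggregated node representations

# Toy usage: 4 nodes (users/businesses) with 3-dimensional features and self-loops.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]])
W = rng.normal(size=(3, 2))
a = rng.normal(size=(4,))
print(attention_aggregate(h, adj, W, a).shape)  # (4, 2)
```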
Fri 7:00 a.m. - 7:30 a.m. | Few-Shot Learning for Network Biology (Invited Talk) | Marinka Zitnik
Fri 7:30 a.m. - 8:00 a.m. | LambdaZero: Exascale Search of Molecules (Case Study Talk) | Maksym Korablyov
Drug discovery is a lengthy process that costs $2.6B per approved drug on average. The cost is so high because finding a small molecule that binds to a particular target is a perilous, highly uncertain process that in many cases cannot be completed within a decade. And even when a molecule is found, it might not do its job very well and may still fail in later stages of trials. Our goal is to find much better drug candidate molecules, and to do so very fast.
Fri 7:45 a.m. - 8:00 a.m. | Topological Representations in Functional Neuroimaging (Case Study Talk) | Tristan Yates
This talk will discuss recent work in topological representation learning for functional neuroimaging.
Fri 8:00 a.m. - 9:00 a.m. | Panel: Beyond Persistence (Live Panel Discussion)
Fri 9:00 a.m. - 9:30 a.m. | Lunch (EST) / Dinner (CET)
Fri 9:30 a.m. - 10:30 a.m. | Poster Session I (on Gather.Town)
Fri 10:30 a.m. - 11:00 a.m. | Topological Analysis of Cancer Genomes (Invited Talk) | Javier Arsuaga
In this talk, Prof. Arsuaga will discuss recent work on TDA for cancer genome analysis.
Fri 10:30 a.m. - 11:00 a.m. | Visualizing the PHATE of Deep Neural Networks (Invited Talk) | Gal Mishne
Despite their massive popularity, deep networks are difficult to interpret or analyze. Their design and training are often driven by intuition, and their tuning is performed via exhaustive hyper-parameter search. More principled evaluations and explorations of deep networks, to understand why and how certain neural networks outperform others, are critical for faster prototyping, reduced training times, and better interpretability. In this talk I present a novel visualization algorithm that reveals the internal geometry of such networks: Multislice PHATE (M-PHATE), the first method designed explicitly to visualize how a neural network's hidden representations of data evolve over the course of training. Our approach depends on the construction of a multi-slice graph that captures both the dynamics and the community structure of the hidden units. Our visualization provides more detailed feedback to the deep learning practitioner than simple global measures (validation loss and accuracy), and without the need to access validation data. We demonstrate comparing different neural networks with M-PHATE in two vignettes: continual learning and generalization.
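As a rough illustration of the multi-slice graph described in the abstract above, the sketch below builds a toy affinity matrix with intraslice edges between hidden units within an epoch and interslice edges linking each unit to itself across epochs. This is an assumption-laden illustration, not the reference M-PHATE kernel construction; the function name, shapes, and bandwidth parameters are hypothetical.

```python
# Illustrative sketch of a multislice graph over hidden-unit activations:
# one "slice" per training epoch, nodes are hidden units, intraslice edges
# connect units with similar activations within an epoch, interslice edges
# connect the same unit to itself across epochs. Not the M-PHATE reference code.
import numpy as np

def multislice_graph(activations, sigma_intra=1.0, sigma_inter=1.0):
    """activations: array of shape (n_epochs, n_units, n_samples), the
    hidden-unit responses on a fixed set of probe samples.
    Returns an (n_epochs * n_units) square affinity matrix."""
    T, U, _ = activations.shape
    N = T * U
    K = np.zeros((N, N))
    for t in range(T):
        A = activations[t]                                  # (U, n_samples)
        # intraslice: Gaussian affinity between units within epoch t
        d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
        K[t*U:(t+1)*U, t*U:(t+1)*U] = np.exp(-d2 / (2 * sigma_intra**2))
    for u in range(U):
        traj = activations[:, u, :]                         # (T, n_samples)
        # interslice: affinity of unit u with itself across epochs
        d2 = ((traj[:, None, :] - traj[None, :, :]) ** 2).sum(-1)
        idx = np.arange(T) * U + u
        K[np.ix_(idx, idx)] = np.exp(-d2 / (2 * sigma_inter**2))
    return K

# Toy usage: 3 epochs, 5 hidden units, 10 probe samples.
acts = np.random.default_rng(1).normal(size=(3, 5, 10))
print(multislice_graph(acts).shape)                         # (15, 15)
```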
Fri 11:00 a.m. - 11:30 a.m. | MALDI-TOF Mass Spectrometry for Antimicrobial Resistance Prediction (Case Study Talk) | Caroline Weis
Fri 11:30 a.m. - 12:30 p.m. | Panel: Beyond Message Passing (Live Panel Discussion)
Fri 12:30 p.m. - 1:30 p.m. | Poster Session II, incl. coffee break (on Gather.Town)
Fri 12:30 p.m. - 12:45 p.m. | Prot-A-GAN: Automatic Functional Annotation of Proteins (Case Study Talk) | Bishnu Sarker
Fri 1:30 p.m. - 2:30 p.m. | Panel: Manifold Learning 2.0 (Live Panel Discussion)
Fri 2:30 p.m. - 2:40 p.m. | Closing Remarks
Fri 2:40 p.m. - 2:50 p.m. | Directional Graph Networks (Spotlight) | Dominique Beaini · Saro Passaro · Vincent Létourneau · William Hamilton · Gabriele Corso · Pietro Liò
Fri 2:50 p.m. - 3:00 p.m. | Don't Stack Layers in Graph Neural Networks, Wire Them Randomly (Spotlight) | Diego Valsesia · Giulia Fracastoro · Enrico Magli
Fri 3:00 p.m. - 3:10 p.m. | Geometry Encoding for Numerical Simulations (Spotlight) | Amir Maleki · Jan F Heyse · Rishikesh Ranade · Haiyang He · Priya Kasimbeg · Jay Pathak
Fri 3:10 p.m. - 3:20 p.m. | On Linear Interpolation in the Latent Space of Deep Generative Models (Spotlight) | Mike Yan Michelis · Quentin Becker
Fri 3:20 p.m. - 3:30 p.m. | Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks (Spotlight) | Cristian Bodnar · Fabrizio Frasca · Yuguang Wang · Nina Otter · Guido Montufar · Pietro Liò · Michael Bronstein
Fri 3:30 p.m. - 4:00 p.m. | Farewell (Live, on Gather.Town)