Type | Time | Title | Authors
Workshop | Fri 9:25 | MIMSS: A Dataset to Evaluate Multi-Image Multi-Spectral Super-Resolution on Sentinel 2 | Muhammed Razzak · Gonzalo Mateo-Garcia · Gurvan Lecuyer · Luis Gomez-Chova · Yarin Gal · Freddie Kalaitzis
Workshop | | Reproducible Subjective Evaluation | Max Morrison · Brian Tang · Gefei Tan · Bryan Pardo
Workshop | | Increasing Confidence in Adversarial Robustness Evaluations | Roland S. Zimmermann · Wieland Brendel · Florian Tramer · Nicholas Carlini
Poster | Thu 2:30 | Evaluating Disentanglement of Structured Representations | Raphaël Dang-Nhu
Workshop | | A Case for Better Evaluation Standards in NLG | Sebastian Gehrmann · Elizabeth Clark · Thibault Sellam
Poster | Wed 10:30 | CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability | Martin Mundt · Steven Lang · Quentin Delfosse · Kristian Kersting
Spotlight | Thu 10:30 | Fairness in Representation for Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling | Ada Wan
Poster | Mon 18:30 | Towards Evaluating the Robustness of Neural Networks Learned by Transduction | Jiefeng Chen · Xi Wu · Yang Guo · Yingyu Liang · Somesh Jha
Poster | Mon 18:30 | On Evaluation Metrics for Graph Generative Models | Rylee Thompson · Boris Knyazev · Elahe Ghalebi · Jungtaek Kim · Graham W Taylor
Poster | Tue 10:30 | Evaluating Distributional Distortion in Neural Language Modeling | Benjamin LeBrun · Alessandro Sordoni · Timothy O'Donnell
Workshop | | A Revealing Large-Scale Evaluation of Unsupervised Anomaly Detection Algorithms | Maxime Alvarez · Jean-Charles Verdier · DJeff Kanda Nkashama · Froduald Kabanza · Marc Frappier · Pierre Martin Tardif
Workshop | | Rethinking Streaming Machine Learning Evaluation | Shreya Shankar · Bernease Herman · Aditya Parameswaran