

Workshop

Setting up ML Evaluation Standards to Accelerate Progress

Rishabh Agarwal · Stephanie Chan · Xavier Bouthillier · Caglar Gulcehre · Jesse Dodge

Fri 29 Apr, 5 a.m. PDT

The aim of this workshop is to discuss and propose standards for evaluating ML research, in order to better identify promising new directions and to accelerate real progress in the field. The problem requires understanding which practices add to or detract from the generalizability and reliability of reported results, and what incentives lead researchers to follow best practices. We may draw inspiration from adjacent scientific fields, from statistics, or from the history of science. Acknowledging that there is no consensus on best practices for ML, the workshop will focus on panel discussions and a few invited talks representing a variety of perspectives. The call for papers welcomes opinion papers as well as more technical papers on the evaluation of ML methods. We plan to summarize the findings and topics that emerge during the workshop in a short report.

Call for Papers: https://ml-eval.github.io/call-for-papers/
Submission Site: https://cmt3.research.microsoft.com/SMILES2022


Schedule