

Poster in Workshop: The Future of Machine Learning Data Practices and Repositories

A Guide to Misinformation Detection Datasets

Camille Thibault · Jacob-Junqi Tian · Gabrielle Péloquin-Skulski · Taylor Curtis · Florence Laflamme · James Zhou · Yuxiang Guan · Reihaneh Rabbany · Jean-François Godbout · Kellin Pelrine


Abstract:

Misinformation is a complex societal issue, and solutions to mitigate it are difficult to develop because of deficiencies in available data. To address this problem, we have curated the largest collection of (mis)information datasets in the literature, totaling 75. From these, we evaluated the quality of all 36 datasets that consist of statements or claims, as well as the 9 datasets that consist of data purely in paragraph form. We assess these datasets to identify those with solid foundations for empirical work and those with flaws that could result in misleading and non-generalizable results, such as insufficient label quality, spurious correlations, or political bias. We further provide state-of-the-art baselines on all these datasets, but show that regardless of label quality, categorical labels may no longer give an accurate evaluation of detection model performance. We discuss alternatives to mitigate this problem. Overall, this guide aims to provide a roadmap for obtaining higher-quality data and conducting more effective evaluations, ultimately improving research in misinformation detection.
