Oral in Affinity Workshop: Tiny Papers Oral Session 4
Evaluating Groups of Features via Consistency, Contiguity, and Stability
Chaehyeon Kim · Weiqiu You · Shreya Havaldar · Eric Wong
Abstract:
Feature attributions explain model predictions by assigning importance scores to input features. In high-dimensional data such as images, these scores are often assigned to groups of features at a time. There are a variety of strategies for creating these groups, ranging from simple patches to deep-learning-based segmentation algorithms. What makes certain groups better than others for explanations? We formally define three key criteria for interpretable groups of features: consistency, contiguity, and stability. Surprisingly, we find that patch-based groups outperform groups created via modern segmentation tools.
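To make the patch-based grouping concrete, here is a minimal illustrative sketch (not the authors' implementation) of how an image's pixels can be partitioned into fixed-size patches, each patch serving as one group of features for attribution:

```python
import numpy as np

def patch_groups(height, width, patch_size):
    """Assign each pixel to a patch-based feature group.

    Returns an integer array of shape (height, width) where each
    entry is the index of the patch (group) containing that pixel.
    This is an illustrative sketch; names and signature are assumptions.
    """
    rows = np.arange(height) // patch_size  # patch row index per pixel row
    cols = np.arange(width) // patch_size   # patch column index per pixel column
    n_cols = -(-width // patch_size)        # ceil division: patches per row
    return rows[:, None] * n_cols + cols[None, :]

# Example: a 4x4 "image" split into 2x2 patches yields 4 groups
groups = patch_groups(4, 4, 2)
print(groups)
# [[0 0 1 1]
#  [0 0 1 1]
#  [2 2 3 3]
#  [2 2 3 3]]
```

An attribution method would then score each group index rather than each individual pixel; segmentation-based alternatives replace this regular grid with region labels from a segmentation algorithm.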