Poster
Unlearning-based Neural Interpretations
Ching Lam Choi · Alexandre Duplessis · Serge Belongie
Hall 3 + Hall 2B #555
Oral presentation: Oral Session 5C
Fri 25 Apr 7:30 p.m. PDT — 9 p.m. PDT
Sat 26 Apr midnight PDT — 2:30 a.m. PDT
Abstract:
Gradient-based interpretations often require an anchor point of comparison to avoid saturation in computing feature importance. We show that current baselines defined using static functions—constant mapping, averaging or blurring—inject harmful colour, texture or frequency assumptions that deviate from model behaviour. This leads to the accumulation of irregular gradients, resulting in attribution maps that are biased, fragile and manipulable. Departing from the static approach, we propose UNI to compute an (un)learnable, debiased and adaptive baseline by perturbing the input towards an unlearning direction of steepest ascent. Our method discovers reliable baselines and succeeds in erasing salient features, which in turn locally smooths the high-curvature decision boundaries. Our analyses point to unlearning as a promising avenue for generating faithful, efficient and robust interpretations.
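The abstract describes the baseline as the input perturbed along a steepest-ascent direction that unlearns the salient evidence, then used as the anchor for a path-integral attribution. Below is a minimal PyTorch sketch of that idea, assuming a trained classifier `model`, signed-gradient ascent on a cross-entropy unlearning objective, and standard integrated gradients over the resulting baseline; the function names, step size `alpha`, and step counts are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def uni_baseline(model, x, y, steps=10, alpha=0.01):
    # Perturb x along a steepest-ascent (unlearning) direction so the
    # model's evidence for class y is erased; the result is an
    # input-adaptive baseline rather than a static one.
    baseline = x.clone().detach()
    for _ in range(steps):
        baseline.requires_grad_(True)
        loss = F.cross_entropy(model(baseline), y)  # unlearning objective (assumed)
        grad, = torch.autograd.grad(loss, baseline)
        # signed-gradient ascent step (one possible choice of ascent rule)
        baseline = (baseline + alpha * grad.sign()).detach()
    return baseline

def integrated_gradients(model, x, y, baseline, n_steps=50):
    # Standard integrated gradients, accumulated along the straight-line
    # path from the unlearned baseline to the input.
    total = torch.zeros_like(x)
    for t in torch.linspace(0.0, 1.0, n_steps):
        point = (baseline + t * (x - baseline)).detach().requires_grad_(True)
        score = model(point).gather(1, y.unsqueeze(1)).sum()
        grad, = torch.autograd.grad(score, point)
        total += grad.detach()
    return (x - baseline) * total / n_steps
```

In use, `integrated_gradients(model, x, y, uni_baseline(model, x, y))` would stand in for attribution against a static black, mean or blurred image; because the baseline is computed per input from the model's own gradients, it carries no fixed colour, texture or frequency prior.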