Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning

Mark Hamilton · Scott Lundberg · Stephanie Fu · Lei Zhang · William Freeman

Keywords: [ Shapley values ] [ similarity learning ] [ metric learning ] [ information retrieval ]

Abstract

Poster: Thu 28 Apr, 10:30 a.m. — 12:30 p.m. PDT, Spot E3 in Virtual World


Visual search, recommendation, and contrastive similarity learning power technologies that impact billions of users worldwide. Modern model architectures can be complex and difficult to interpret, and there are several competing techniques one can use to explain a search engine's behavior. We show that the theory of fair credit assignment provides a unique axiomatic solution that generalizes several existing recommendation- and metric-explainability techniques in the literature. Using this formalism, we show when existing approaches violate "fairness" and derive methods that sidestep these shortcomings and naturally handle counterfactual information. More specifically, we show that existing approaches implicitly approximate second-order Shapley-Taylor indices, and we extend CAM, GradCAM, LIME, SHAP, SBSM, and other methods to search engines. These extensions can extract pairwise correspondences between images from trained opaque-box models. We also introduce a fast kernel-based method for estimating Shapley-Taylor indices that requires orders of magnitude fewer function evaluations to converge. Finally, we show that these game-theoretic measures yield more consistent explanations for image similarity architectures.
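To make the fair-credit-assignment idea concrete, here is a minimal sketch of exact Shapley value computation for a toy "similarity score" over image patches. The value function `v` below is entirely hypothetical (invented weights and a single second-order interaction standing in for a patch correspondence); it is not the paper's method, only an illustration of the attribution axioms the abstract invokes, with exponential cost suitable only for toy sizes.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: each player's weighted average marginal
    contribution over all coalitions (exponential cost; toy sizes only)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley coalition weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (v(frozenset(S) | {p}) - v(frozenset(S)))
    return phi

# Hypothetical stand-in for a similarity score restricted to a subset of
# patches: patches 0 and 1 interact, mimicking a pairwise correspondence.
def v(S):
    base = {0: 0.2, 1: 0.3, 2: 0.1}
    score = sum(base[p] for p in S)
    if 0 in S and 1 in S:
        score += 0.4  # second-order interaction term
    return score

phi = shapley_values([0, 1, 2], v)
# Efficiency axiom: attributions sum to v(N) - v(empty set)
assert abs(sum(phi.values()) - (v(frozenset({0, 1, 2})) - v(frozenset()))) < 1e-9
```

The interaction term is split evenly between patches 0 and 1 by symmetry, while patch 2 receives only its own contribution; second-order Shapley-Taylor indices refine this by attributing such interactions to pairs rather than folding them into individual scores.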
