

Poster

Enhancing Uncertainty Estimation and Interpretability with Bayesian Non-negative Decision Layer

Xinyue Hu · Zhibin Duan · Bo Chen · Mingyuan Zhou

Hall 3 + Hall 2B #509
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Although deep neural networks have demonstrated significant success due to their powerful expressiveness, most models struggle to meet practical requirements for uncertainty estimation. Concurrently, the entangled nature of deep neural networks leads to a multifaceted problem, where various localized explanation techniques reveal that multiple unrelated features influence the decisions, thereby undermining interpretability. To address these challenges, we develop a Bayesian Nonnegative Decision Layer (BNDL), which reformulates deep neural networks as a conditional Bayesian non-negative factor analysis. By leveraging stochastic latent variables, the BNDL can model complex dependencies and provide robust uncertainty estimation. Moreover, the sparsity and non-negativity of the latent variables encourage the model to learn disentangled representations and decision layers, thereby improving interpretability. We also offer theoretical guarantees that BNDL can achieve effective disentangled learning. In addition, we developed a corresponding variational inference method utilizing a Weibull variational inference network to approximate the posterior distribution of the latent variables. Our experimental results demonstrate that with enhanced disentanglement capabilities, BNDL not only improves the model's accuracy but also provides reliable uncertainty estimation and improved interpretability.
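To make the idea concrete, here is a minimal sketch of what a non-negative decision layer with a Weibull variational inference network might look like. This is not the authors' reference implementation: the module name, layer shapes, and the softplus parameterizations are assumptions, and the KL term of the variational objective is omitted. It only illustrates the two ingredients named in the abstract, a reparameterized non-negative stochastic latent variable and a non-negative decision matrix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeibullDecisionLayer(nn.Module):
    """Hypothetical sketch of a Bayesian non-negative decision layer.

    An inference net maps features h to Weibull shape k and scale lam;
    a non-negative latent z is drawn with the Weibull reparameterization
    z = lam * (-log(1 - u))**(1/k), u ~ Uniform(0, 1); class logits come
    from a non-negative decision matrix (softplus of W).
    """

    def __init__(self, feat_dim: int, latent_dim: int, num_classes: int):
        super().__init__()
        self.to_k = nn.Linear(feat_dim, latent_dim)    # Weibull shape head
        self.to_lam = nn.Linear(feat_dim, latent_dim)  # Weibull scale head
        self.W = nn.Parameter(torch.randn(latent_dim, num_classes) * 0.01)

    def forward(self, h: torch.Tensor):
        # Softplus keeps the Weibull parameters strictly positive (assumption).
        k = F.softplus(self.to_k(h)) + 1e-4
        lam = F.softplus(self.to_lam(h)) + 1e-4
        # Reparameterized Weibull sample: a sparse, non-negative latent code.
        u = torch.rand_like(lam).clamp(1e-6, 1 - 1e-6)
        z = lam * (-torch.log(1.0 - u)).pow(1.0 / k)
        # Non-negative decision layer: softplus constrains the weights >= 0.
        logits = z @ F.softplus(self.W)
        return logits, (k, lam)
```

Because the latent code is stochastic, repeating the forward pass for the same input yields a distribution over logits, which is one way such a layer can expose predictive uncertainty at test time.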
