Fairness-Aware Multi-view Evidential Learning with Adaptive Prior
Abstract
Multi-view evidential learning harnesses diverse data sources to improve prediction performance and provide reliable uncertainty estimates. Recent advances have primarily focused on optimizing evidence fusion strategies, assuming that the evidence extracted from each view is inherently reliable for downstream integration. However, our empirical analysis reveals that samples tend to be assigned biased evidence favoring data-rich classes, yielding unfair uncertainty estimates. This motivates us to investigate a new problem, Biased Evidential Multi-view Learning (BEML). To address it, we propose the Fairness-Aware Multi-view Evidential Learning (FAML) method, which rectifies biased evidence learning. Specifically, FAML introduces a training-trajectory-based adaptive prior into the construction of the Dirichlet parameters, flexibly calibrating the initial evidence assigned to each class during training. Furthermore, we incorporate a fairness constraint as a regularization term to alleviate bias in the evidence. In the multi-view fusion stage, we propose an opinion alignment mechanism that mitigates view-specific bias across views, thereby encouraging the integration of consistent and mutually supportive evidence. Theoretical analysis shows that FAML achieves a less biased evidence allocation. Extensive experiments on real-world multi-view datasets demonstrate the superiority of FAML in terms of both prediction performance and uncertainty estimation.
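For orientation, the following sketch recalls the standard evidential construction of Dirichlet parameters from per-class evidence (as in Sensoy et al., 2018), and then shows, under one reading of the abstract, where an adaptive prior could enter; the abstract does not specify the prior's exact form, so the class-wise term \(\lambda_k^{(t)}\) below is purely illustrative.

\[
\underbrace{\alpha_k = e_k + 1}_{\text{fixed unit prior}}, \qquad
S = \sum_{k=1}^{K} \alpha_k, \qquad
b_k = \frac{e_k}{S}, \qquad
u = \frac{K}{S},
\]

where \(e_k \ge 0\) is the evidence for class \(k\), \(b_k\) its belief mass, and \(u\) the overall uncertainty. A trajectory-adaptive prior would replace the fixed unit prior with a class-wise, time-varying weight,

\[
\alpha_k = e_k + \lambda_k^{(t)},
\]

with \(\lambda_k^{(t)}\) updated from training statistics, so that data-rich classes no longer receive a uniformly advantaged initial allocation.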