Poster in Workshop: 5th Workshop on practical ML for limited/low resource settings (PML4LRS) @ ICLR 2024
Multi-source Fully Test-Time Adaptation
Yuntao Du · Siqi Luo · Yi Xin · MingCai Chen · Shuai Feng · Mujie Zhang · Chongjun Wang
Deep neural networks often generalize poorly when the distribution of test samples differs from that of the training samples. Recently, fully test-time adaptation methods have been proposed to adapt a trained model to unlabeled test samples before prediction. Despite achieving remarkable results, these methods involve only a single trained model, which can provide only limited side information for the test samples. In real-world scenarios, multiple trained models may be available that are beneficial to the test samples and complementary to each other. Consequently, to better exploit these trained models, we propose the problem of multi-source fully test-time adaptation, which adapts multiple trained models to the test samples. To achieve this, we introduce a simple yet effective method consisting of a weighted aggregation scheme and two unsupervised losses: the former adaptively assigns a higher weight to a more relevant model, while the latter jointly adapts the models with online unlabeled samples. Extensive experiments on three image classification datasets show that the proposed method outperforms baseline methods.
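The abstract describes two components: a weighted aggregation over the source models' predictions and unsupervised losses for joint adaptation on unlabeled test batches. Below is a minimal PyTorch sketch of what such a scheme could look like; the `MultiSourceTTA` class, the learnable `weight_logits`, the toy linear models, and the use of entropy minimization as the unsupervised loss are illustrative assumptions, not the paper's actual method.

```python
# A minimal sketch of multi-source test-time adaptation via weighted
# aggregation and entropy minimization. The aggregation parameterization
# and the loss are assumptions for illustration; the paper's actual
# weighting scheme and two unsupervised losses may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


def entropy(probs: torch.Tensor) -> torch.Tensor:
    # Shannon entropy of each sample's predictive distribution.
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)


class MultiSourceTTA(nn.Module):
    def __init__(self, source_models: list):
        super().__init__()
        self.models = nn.ModuleList(source_models)
        # Learnable aggregation logits: softmax yields one weight per
        # source model, so a more relevant model can receive a higher weight.
        self.weight_logits = nn.Parameter(torch.zeros(len(source_models)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.weight_logits, dim=0)           # (M,)
        probs = torch.stack(
            [F.softmax(m(x), dim=1) for m in self.models]        # (M, B, C)
        )
        # Weighted average of the source models' predictive distributions.
        return (weights[:, None, None] * probs).sum(dim=0)       # (B, C)


if __name__ == "__main__":
    # Three toy source models standing in for pretrained networks.
    sources = [nn.Linear(32, 10) for _ in range(3)]
    tta = MultiSourceTTA(sources)
    optimizer = torch.optim.SGD(tta.parameters(), lr=1e-3)

    x = torch.randn(16, 32)            # one online batch of unlabeled test samples
    probs = tta(x)
    loss = entropy(probs).mean()       # entropy minimization as a stand-in unsupervised loss
    optimizer.zero_grad()
    loss.backward()                    # jointly updates all models and the weights
    optimizer.step()
```

Because the aggregation weights and the source models share one unsupervised objective, each adaptation step both re-weights the models and adapts their parameters to the incoming test distribution; in practice, fully test-time methods often restrict updates to a subset of parameters (e.g., normalization layers) for stability.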