Abstract: We tackle unsupervised domain adaptation by accounting for the fact that different domains may need to be processed differently to arrive at a common feature representation effective for recognition. To this end, we introduce a deep learning framework in which each domain undergoes a different sequence of operations, allowing some, possibly more complex, domains to go through more computations than others. This contrasts with state-of-the-art domain adaptation techniques that force all domains to be processed with the same series of operations, even when using multi-stream architectures whose parameters are not shared. As evidenced by our experiments, the greater flexibility of our method translates to higher accuracy. Furthermore, it allows us to handle any number of domains simultaneously.
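The core idea can be illustrated with a minimal sketch (not the authors' implementation): each domain is routed through its own sequence of operations, possibly of different length, before a shared head is applied. The operation names, the domain labels, and the toy classifier below are all illustrative assumptions.

```python
def normalize(x):
    """Scale a feature vector to unit sum (toy operation)."""
    s = sum(x)
    return [v / s for v in x] if s else x

def square(x):
    """Element-wise square (toy operation)."""
    return [v * v for v in x]

def shift(x):
    """Add a constant offset (toy operation)."""
    return [v + 1.0 for v in x]

# Hypothetical per-domain pipelines: a "complex" domain goes through
# more operations than a "simple" one, unlike architectures that force
# the same series of operations on every domain.
DOMAIN_PIPELINES = {
    "sketch": [shift, square, normalize],  # longer sequence
    "photo": [normalize],                  # shorter sequence
}

def extract_features(domain, x):
    """Run the input through its domain-specific operation sequence."""
    for op in DOMAIN_PIPELINES[domain]:
        x = op(x)
    return x

def shared_classifier(features):
    """Toy shared head: predict the index of the largest feature."""
    return max(range(len(features)), key=lambda i: features[i])
```

Both domains end in the same feature space (here, unit-sum vectors of equal length), so a single classifier can operate on either; adding a new domain only requires registering a new pipeline, which is what makes handling any number of domains straightforward.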
