Jul 12, 2020
Learning domain-invariant representations is a popular approach to unsupervised domain adaptation, i.e., generalizing from a labeled source domain to an unlabeled target domain. In this work, we aim to better understand and estimate the effect of domain-invariant representations on generalization to the target. In particular, we study the effect of the complexity of the latent, domain-invariant representation, and find that it has a significant influence on the target risk. Based on these findings, we propose a general approach for addressing this complexity tradeoff in neural networks. We also propose a method for estimating how well a model based on domain-invariant representations will perform on the target domain, without having seen any target labels. Applications of our results include model selection, deciding when to stop training, and predicting the adaptability of a model between domains.
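To make the core ingredient concrete, here is a minimal sketch of learning a domain-invariant representation: a shared encoder is trained to classify labeled source data while an MMD penalty aligns the source and target latent distributions, and the weight `lam` trades off classification accuracy against invariance. This is a generic instance of the approach, not the paper's implementation; the network sizes, the RBF kernel bandwidth, and the toy data shapes are illustrative assumptions.

```python
# Sketch of domain-invariant representation learning (assumed setup,
# not the authors' code): source classification loss + MMD alignment.
import torch
import torch.nn as nn

def rbf_mmd(x, y, sigma=1.0):
    """Squared MMD between samples x and y under an RBF kernel (bandwidth assumed)."""
    def k(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Shared encoder producing the latent representation, plus a classifier head.
encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 8))
classifier = nn.Linear(8, 2)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
)
lam = 0.1  # invariance weight; governs the accuracy/alignment tradeoff

# Toy data: labeled source, unlabeled (shifted) target.
xs, ys = torch.randn(128, 20), torch.randint(0, 2, (128,))
xt = torch.randn(128, 20) + 0.5

for step in range(200):
    zs, zt = encoder(xs), encoder(xt)
    loss = nn.functional.cross_entropy(classifier(zs), ys) + lam * rbf_mmd(zs, zt)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because only unlabeled target inputs enter the MMD term, the same latent representations `zs` and `zt` can also feed label-free diagnostics of target performance, which is the setting the paper's estimation method addresses.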
The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.