Jul 12, 2020
Models trained on synthetic images often generalize poorly to real data. To remedy such domain gaps, synthetic training in domain generalization and adaptation typically starts from ImageNet-pretrained models, since they encode representations learned from real images. However, the role of this ImageNet representation is seldom discussed, despite common practices that implicitly leverage it to maintain generalization ability. One such practice is careful hand-tuning of learning rates across different network layers, which is laborious and does not scale. We treat this as a learning-without-forgetting problem and propose a learning-to-optimize (L2O) method to automate layer-wise learning rates. With comprehensive experiments, we demonstrate that the proposed method can significantly improve synthetic-to-real generalization performance without seeing or training on real data, while also benefiting downstream tasks such as domain adaptation.
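The hand-tuning the abstract refers to can be pictured as assigning each layer its own learning rate, so that pretrained early layers move little while the task head adapts freely. The sketch below is purely illustrative and is not the paper's L2O method: the layer names, rates, and values are hypothetical, showing only the baseline practice the method automates.

```python
def layerwise_sgd_step(params, grads, layer_lrs):
    """One SGD update where each layer gets its own learning rate.

    params, grads: dict mapping layer name -> list of weights/gradients
    layer_lrs:     dict mapping layer name -> learning rate
    """
    return {
        name: [w - layer_lrs[name] * g for w, g in zip(params[name], grads[name])]
        for name in params
    }

# Hypothetical example: keep the early (pretrained) layer nearly frozen,
# update the task head with a much larger rate.
params = {"conv1": [1.0, -0.5], "fc": [0.2, 0.8]}
grads  = {"conv1": [0.1,  0.1], "fc": [0.5, -0.5]}
lrs    = {"conv1": 1e-4, "fc": 1e-1}  # hand-picked values, illustrative only

new_params = layerwise_sgd_step(params, grads, lrs)
```

The proposed approach replaces the hand-picked `lrs` dictionary with rates produced by a learned optimization policy, removing the manual per-layer tuning.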
The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.