Dec 6, 2021
Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled target domain, but it requires access to the source data, which often raises concerns about data privacy, data portability, and data transmission efficiency. We study unsupervised model adaptation (UMA), an alternative setting that aims to adapt source-trained models towards target distributions without accessing source data. To this end, we design an innovative historical contrastive learning (HCL) technique that exploits the historical source hypothesis to make up for the absence of source data in UMA. HCL addresses the UMA challenge from two perspectives. First, we design historical contrastive instance discrimination (HCID), which learns from target samples by contrasting their features as generated by the currently adapted model and by historical models. With the source-trained and earlier-epoch models as the historical models, HCID encourages UMA to learn instance-level discriminative representations while preserving the source hypothesis. Second, we design historical contrastive category discrimination (HCCD), which pseudo-labels target samples to learn category-level discriminative representations. Instead of globally thresholding pseudo-labels, HCCD re-weights them according to their prediction consistency across the current and historical models. Extensive experiments show that HCL outperforms and complements state-of-the-art methods consistently across a variety of visual tasks (e.g., segmentation, classification, and detection) and setups (e.g., closed-set, open-set, and partial adaptation).
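The two components described above can be sketched in a few lines. Below is a minimal, hedged illustration (not the authors' implementation): HCID is rendered as an InfoNCE-style loss where the positive for each target sample's current-model feature is the same sample's feature under a historical model, and HCCD is rendered with one plausible consistency measure, the product of current and historical confidence on the pseudo-labeled class; the paper's exact formulas may differ.

```python
import numpy as np

def hcid_loss(q, k_hist, tau=0.07):
    """Historical contrastive instance discrimination (sketch).

    q:      (N, D) target features from the currently adapted model.
    k_hist: (N, D) features of the SAME samples from a historical model
            (e.g. the frozen source-trained model). Row i of k_hist is
            the positive for row i of q; other rows act as negatives.
    """
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    k = k_hist / np.linalg.norm(k_hist, axis=1, keepdims=True)
    logits = q @ k.T / tau                       # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # InfoNCE over matched pairs

def hccd_weights(p_cur, p_hist):
    """Historical contrastive category discrimination (sketch).

    Pseudo-labels come from the current model's predictions p_cur
    (N, C); each pseudo-label's weight is the product of the current
    and historical confidence on that class, so samples the two models
    disagree on are down-weighted rather than hard-thresholded.
    """
    y_pseudo = p_cur.argmax(axis=1)
    idx = np.arange(len(y_pseudo))
    w = p_cur[idx, y_pseudo] * p_hist[idx, y_pseudo]
    return y_pseudo, w
```

The re-weighting in `hccd_weights` replaces a global confidence threshold: instead of discarding low-confidence pseudo-labels outright, every sample contributes to the category-level loss in proportion to how consistently the current and historical models predict its class.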
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.