SimCLR: A Simple Framework for Contrastive Learning of Visual Representations

Jul 12, 2020

About

This paper presents a simple framework for contrastive representation learning. The framework, SimCLR, simplifies recently proposed approaches and requires neither specific architectural modifications nor a memory bank. In order to understand what enables the contrastive prediction task to learn useful representations, we systematically study the major components in the framework. We empirically show that 1) composition of data augmentations plays a critical role in defining the predictive tasks that enable effective representation learning, 2) introducing a learned nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the representation, and 3) contrastive learning benefits from a larger batch size and more training steps compared to the supervised counterpart. By combining our findings, we improve considerably over previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on the representation of our best model achieves 76.5% top-1 accuracy, a 7% relative improvement over the previous state of the art, matching the performance of a supervised ResNet-50.
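
For reference, the objective the abstract describes can be sketched compactly: each image is augmented twice, both views are encoded, passed through the nonlinear projection head, and scored with a normalized temperature-scaled cross-entropy (NT-Xent) loss in which the other examples in the batch act as negatives. The sketch below is a minimal PyTorch rendering under these assumptions, not the authors' implementation; `encoder`, `projection_head`, and `augment` are hypothetical placeholders (e.g. a ResNet-50, a small MLP, and random crop + color distortion + blur).

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        # z1, z2: projections of two augmented views of the same N images, shape [N, d].
        # Each embedding's positive is its counterpart view; the other 2(N-1)
        # embeddings in the batch serve as negatives, so no memory bank is needed.
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, d], unit-normalized
        sim = z @ z.t() / temperature                        # pairwise cosine similarities
        sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity terms
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(sim.device)
        return F.cross_entropy(sim, targets)

    # Hypothetical training step; `encoder`, `projection_head`, and `augment`
    # are placeholders, not the authors' exact code.
    def training_step(x, encoder, projection_head, augment, temperature=0.5):
        v1, v2 = augment(x), augment(x)                      # two correlated views
        h1, h2 = encoder(v1), encoder(v2)                    # representations used downstream
        z1, z2 = projection_head(h1), projection_head(h2)    # space where the loss is applied
        return nt_xent_loss(z1, z2, temperature)

Note that the loss is computed on the projections z, while the representation h (before the projection head) is what the linear classifier is trained on; this separation is the "learned nonlinear transformation" referred to in the abstract.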

About ICML 2020

The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.
