Dec 6, 2021
Conditional Generative Adversarial Networks (cGANs) are implicit generative models that allow us to sample from class-conditional distributions. Existing cGAN works are based on a wide range of different architectures and objectives. One popular design choice in earlier works is to include a classifier during training, under the assumption that a good classifier can help eliminate samples generated with the wrong class. Nevertheless, including classifiers in cGANs often comes with the side effect of generating only easy-to-classify samples. Recently, some representative cGANs have avoided this side effect and reached state-of-the-art performance without using classifiers. It remains unclear, however, whether classifiers can be resurrected to design better cGANs. In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs. We start by using the decomposition of the joint probability distribution to connect the goals of the cGAN and the classifier in a unified framework. The framework, along with a classic energy model to parameterize the distribution, justifies the use of classifiers for cGANs in a principled manner. In addition, it explains several popular cGAN variants, such as ACGAN, ProjGAN, and ContraGAN, as special cases with different levels of approximation. Experimental results demonstrate that the framework outperforms state-of-the-art cGANs on benchmark datasets, especially on the harder Tiny ImageNet.
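As a rough sketch of the decomposition the abstract refers to (our paraphrase, not the paper's exact notation): the joint log-likelihood can be split as log p(x, y) = log p(y | x) + log p(x), where the first term is what a classifier estimates and the second is what an unconditional GAN discriminator targets. An energy-based parameterization such as p(x, y) ∝ exp(f_y(x)), where f_y(x) is the class-y logit of a shared network, would then let the same logits serve both the classifier and the discriminator roles, which is one way the classifier can be tied back into cGAN training in a principled manner.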
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.