December 6, 2021
We derive nearly sharp non-asymptotic error bounds for the bidirectional GAN estimators under the Dudley distance between the latent joint distribution and the data joint distribution. To the best of our knowledge, this is the first theoretical guarantee for the bidirectional GAN learning approach. An appealing feature of our results is that the reference and the target distributions are not assumed to have the same dimension or bounded support. Such assumptions are common in the existing convergence analyses of unidirectional GANs but may not be satisfied in practice. We show that the prefactors in the error bounds depend on the square root of the dimension of the target distribution, a significant improvement over the exponential dependence on the dimension in existing error bounds for unidirectional GANs. Our results also apply to the Wasserstein bidirectional GAN if the target distribution is assumed to have bounded support. To prove these results, we construct neural network functions that push forward an empirical distribution to another arbitrary empirical distribution on a possibly different-dimensional space. We also develop a novel decomposition of the integral probability metric for the error analysis of bidirectional GANs. These basic theoretical results are of independent interest and can be applied to other related learning problems.
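The object the abstract analyzes — the bidirectional GAN matching the "data joint" distribution of (x, E(x)) against the "latent joint" distribution of (G(z), z) — can be illustrated with a minimal numerical sketch. Everything below is a toy assumption for illustration (an affine generator with its exact inverse as the encoder; 1-D latent, 2-D data), not the paper's construction; it merely shows that when E inverts G, the two empirical joint distributions coincide, so any integral probability metric between them (Dudley distance included) vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): latent z ~ N(0, 1) in R^1, data x = A z + b in R^2,
# so the reference and target distributions have different dimensions.
A = np.array([[2.0], [1.0]])
b = np.array([0.5, -0.5])
z = rng.standard_normal((1000, 1))
x = z @ A.T + b

def G(z):
    # Generator: latent space R^1 -> data space R^2.
    return z @ A.T + b

def E(x):
    # Encoder: exact left inverse of G (least-squares pseudoinverse of A).
    return (x - b) @ A / (A.T @ A).item()

# "Data joint" samples (x, E(x)) vs. "latent joint" samples (G(z), z).
data_joint = np.hstack([x, E(x)])
latent_joint = np.hstack([G(z), z])

# With E = G^{-1}, the two joints agree up to floating-point round-off,
# so any IPM between their empirical distributions is (numerically) zero.
gap = np.abs(data_joint - latent_joint).max()
print(gap)
```

A trained bidirectional GAN only drives such a gap toward zero via a discriminator on the joint space; the abstract's bounds quantify how fast that can happen in the Dudley distance.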
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.
Presentations on a similar topic, in the same category, or by the same speaker
Kangwook Kim, …
Danil Kuzin, …
Abhin Shah, …
Itai Gat, …