
            Improving Variational Autoencoders with Density Gap-based Regularization

            Nov 28, 2022

            Speakers

            Jianfei Zhang

            Speaker · 0 followers

            Jun Bai

            Speaker · 0 followers

            Chenghua Lin

            Speaker · 0 followers

            About

            Variational autoencoders (VAEs) are among the most powerful unsupervised learning frameworks in NLP for latent representation learning and latent-directed generation. The classic optimization objective of VAEs is to maximize the Evidence Lower Bound (ELBo), which consists of a conditional likelihood term for generation and a negative Kullback-Leibler (KL) divergence term for regularization. In practice, optimizing the ELBo often leads the posterior distributions of all samples to converge to the same degenerate local opt…
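As background for the abstract above, here is a minimal NumPy sketch of the standard Gaussian-VAE ELBo (the likelihood term minus the KL regularizer). This illustrates only the classic objective the abstract refers to, not the paper's density gap-based regularizer; the function names are illustrative.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.
    # Closed form: 0.5 * sum( sigma^2 + mu^2 - 1 - log sigma^2 )
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def elbo(log_likelihood, mu, logvar):
    # ELBo = E_q[ log p(x|z) ] - KL( q(z|x) || p(z) )
    # `log_likelihood` stands in for the (Monte Carlo estimated) reconstruction term.
    return log_likelihood - gaussian_kl(mu, logvar)

# When the posterior equals the prior N(0, I), the KL term vanishes —
# the degenerate situation ("posterior collapse") the abstract alludes to,
# where all samples share the same posterior.
mu = np.zeros(8)
logvar = np.zeros(8)
print(gaussian_kl(mu, logvar))  # → 0.0
```

Maximizing the ELBo therefore trades reconstruction quality against how far each posterior may move from the prior; if the KL term dominates, every posterior collapses onto the prior.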

            Organizer

            NeurIPS 2022

            Account · 963 followers

            Like this format? Trust SlidesLive to capture your next event!

            Professional recording and livestreaming, worldwide.


            Recommended videos

            Presentations with a similar topic, category, or speaker

            Monte Carlo Tree Descent for Black-Box Optimization
            04:34

            Yaoguang Zhai, …

            NeurIPS 2022 · 2 years ago

            Does GNN Pretraining Help Molecular Representation?
            04:58

            Ruoxi Sun, …

            NeurIPS 2022 · 2 years ago

            Rethinking Explainability as a Dialogue: A Practitioner's Perspective
            04:05

            Hima Lakkaraju, …

            NeurIPS 2022 · 2 years ago

            Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively
            04:40

            Haojie Zhang, …

            NeurIPS 2022 · 2 years ago

            Graph-Relational Distributionally Robust Optimization
            04:42

            Fengchun Qiao, …

            NeurIPS 2022 · 2 years ago

            Learning Viewpoint-Agnostic Visual Representations by Recovering Tokens in 3D Space
            05:03

            Jinghuan Shang, …

            NeurIPS 2022 · 2 years ago

            Interested in talks like this? Follow NeurIPS 2022