            Meta-Complementing the Semantics of Short Texts in Neural Topic Models

            Nov 28, 2022

            Speakers

            Delvin Ce Zhang

            Speaker · 0 followers

            Hady W. Lauw

            Speaker · 0 followers

            About

            Topic models infer latent topic distributions based on observed word co-occurrences in a text corpus. While typically a corpus contains documents of variable lengths, most previous topic models treat documents of different lengths uniformly, assuming that each document is sufficiently informative. However, shorter documents may have only a few word co-occurrences, resulting in inferior topic quality. Some other previous works assume that all documents are short, and leverage external auxiliary d…
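
            For orientation, the abstract's core idea (inferring a per-document topic distribution from observed word co-occurrences, which becomes unreliable when documents are short and sparse) can be illustrated with a minimal, hypothetical sketch of a generic VAE-style neural topic model. This is not the paper's meta-complementing method; the class name, dimensions, and toy data below are illustrative assumptions only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleNeuralTopicModel(nn.Module):
    """Generic VAE-style neural topic model (illustrative sketch, not the
    paper's method): encodes a bag-of-words vector into a per-document
    topic distribution and reconstructs word probabilities from a shared
    topic-word matrix."""

    def __init__(self, vocab_size: int, num_topics: int, hidden_dim: int = 100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden_dim), nn.Softplus())
        self.mu = nn.Linear(hidden_dim, num_topics)
        self.logvar = nn.Linear(hidden_dim, num_topics)
        # Maps the topic mixture to word logits; its weights are the topics.
        self.topic_word = nn.Linear(num_topics, vocab_size, bias=False)

    def forward(self, bow):
        h = self.encoder(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample a latent code, then map it onto
        # the topic simplex with a softmax.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        theta = F.softmax(z, dim=-1)              # document-topic mixture
        word_logits = self.topic_word(theta)      # reconstructed word logits
        recon = -(bow * F.log_softmax(word_logits, dim=-1)).sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return (recon + kl).mean(), theta


# Toy usage on a 10-word vocabulary. The first document has only two word
# occurrences -- exactly the sparsity problem short texts pose for topic models.
model = SimpleNeuralTopicModel(vocab_size=10, num_topics=3)
bow = torch.zeros(2, 10)
bow[0, torch.tensor([1, 4])] = 1.0
bow[1, torch.tensor([0, 2, 3, 5, 7])] = 1.0
loss, theta = model(bow)
print(loss.item(), theta.shape)   # scalar ELBO-style loss, (2, 3) topic mixtures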

            Organizer

            NeurIPS 2022

            Account · 961 followers

            Recommended Videos

            Presentations with a similar topic, category, or speaker

            Quo Vadis: Is Trajectory Forecasting the Key Towards Long-Term Multi-Object Tracking?
            05:03

            Patrick Dendorfer, …

            NeurIPS 2022 · 2 years ago

            Regularized Molecular Conformation Fields
            01:05

            Lihao Wang, …

            NeurIPS 2022 · 2 years ago

            Creative Culture and Machine Learning
            1:46:18

            Negar Rostamzadeh, …

            NeurIPS 2022 · 2 years ago

            Test-time adaptation with slot-centric models
            05:20

            Mihir Prabhudesai, …

            NeurIPS 2022 · 2 years ago

            Minimax Optimal Fair Regression under Linear Model
            03:06

            Kazuto Fukuchi, …

            NeurIPS 2022 · 2 years ago

            Point Transformer V2: Grouped Vector Attention and Improved Sampling
            04:41

            Xiaoyang Wu, …

            NeurIPS 2022 · 2 years ago
