            Near Optimal Sample Complexity Bounds for Learning Latent k-polytopes and applications to Ad-Mixtures

            Jul 12, 2020

            Speakers

Chiranjib Bhattacharyya

Ravindran Kannan

            About

Recently, near-optimal bounds on the sample complexity of mixtures of Gaussians were shown in the seminal paper <cit.>. No such results are known for Ad-mixtures. In this paper we show that O^*(dk/m) samples are sufficient to learn each of the k topic vectors of LDA, a popular Ad-mixture model, with vocabulary size d and m ∈ Ω(1) words per document, to any constant error in L_1 norm. This is a corollary of the major contribution of the current paper: the first sample complexity upper bound for the pr…
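
To make the quantities in the abstract concrete, the sketch below sets up the LDA ad-mixture generative model with the abstract's parameters: vocabulary size d, k latent topic vectors (the vertices of the latent k-polytope), and m words per document, then draws on the order of dk/m documents, the number the bound says suffices. This is a minimal illustrative sketch only; the Dirichlet parameter alpha, the constant C, and all variable names are assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the LDA ad-mixture setup from the abstract.
# alpha, C, and all names below are assumptions, not from the paper.
rng = np.random.default_rng(0)

d = 1000    # vocabulary size
k = 5       # number of latent topic vectors (vertices of the k-polytope)
m = 50      # words per document
alpha = 0.1 # Dirichlet concentration for per-document topic weights (assumed)

# k latent topic vectors: each row is a point of the probability simplex in R^d.
topics = rng.dirichlet(np.ones(d), size=k)          # shape (k, d)

def sample_document():
    """Draw one document: mix the topics by a Dirichlet weight, then draw m words."""
    theta = rng.dirichlet(alpha * np.ones(k))        # per-document topic weights
    word_dist = theta @ topics                       # a point inside the latent polytope
    return rng.multinomial(m, word_dist)             # bag-of-words count vector, shape (d,)

# The bound says O^*(dk/m) documents suffice; C stands in for the hidden constant.
C = 10
n = int(C * d * k / m)
corpus = np.array([sample_document() for _ in range(n)])
print(corpus.shape)  # (n, d) word-count matrix, here (1000, 1000)
```

The point of the sketch is the scaling: each document contributes only m word draws over a d-word vocabulary, so the sufficient number of documents grows with dk but shrinks as m grows.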

            Organizer

ICML 2020

            Categories

            AI & Data Science

            Mathematics

            About ICML 2020

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

            Recommended Videos

Presentations on a similar topic, in the same category, or by the same speakers

            Laguerre-Gauss Preprocessing: Line Profiles as Image Features
            05:07

Alejandro Murillo-Gonzalez, …
ICML 2020

            Deep Molecular Programming: A Natural Implementation of Binary-Weight ReLU Neural Networks
            14:25

Marko Vasic, …
ICML 2020

            Sharp Composition Bounds for Gaussian Differential Privacy via Edgeworth Expansion
            12:45

Qinqing Zheng, …
ICML 2020

            Feature Noise Induces Loss Discrepancy Across Groups
            14:38

Fereshte Khani, …
ICML 2020

            Class Weighted Classification: Trade-offs and Robust Approaches
            11:48

Ziyu Xu, …
ICML 2020

            Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation
            14:46

Xiang Jiang, …
ICML 2020
