            How To Trust Your Diffusion Model: A Convex Optimization Approach to Conformal Risk Control

            Jul 24, 2023

Speakers

Jacopo Teneggi
Matthew Tivnan
Webster J. Stayman

            About

Score-based generative models, informally referred to as diffusion models, continue to grow in popularity across several important domains and tasks. While they provide high-quality and diverse samples from empirical distributions, important questions remain on the reliability and trustworthiness of these sampling procedures for their responsible use in critical scenarios. Conformal prediction is a modern tool to construct finite-sample, distribution-free uncertainty guarantees for any black-b…
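
To make the abstract's reference to conformal prediction concrete, the sketch below shows a generic split conformal prediction routine for a black-box regressor. It is not the conformal risk control procedure presented in this talk; the function name, data splits, and model are illustrative assumptions only.

# Minimal sketch of split conformal prediction for a generic black-box
# regressor. Illustrates the finite-sample, distribution-free coverage
# guarantee the abstract refers to; names and model choice are assumptions.
import numpy as np

def split_conformal_intervals(model, X_cal, y_cal, X_test, alpha=0.1):
    """Return prediction intervals with ~(1 - alpha) marginal coverage."""
    # Nonconformity scores on a held-out calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile level of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat

# Usage (hypothetical data and model):
# from sklearn.linear_model import Ridge
# model = Ridge().fit(X_train, y_train)
# lo, hi = split_conformal_intervals(model, X_cal, y_cal, X_test, alpha=0.1)
# Each interval [lo_i, hi_i] covers y_i with probability at least 1 - alpha.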

Organizer

ICML 2023

            Recommended Videos

Presentations on a similar topic, in the same category, or by the same speakers

Multiply Robust Off-policy Evaluation and Learning under Truncation by Death (04:40)
Jianing Chu, … · ICML 2023 · 2 years ago

High-dimensional Optimization in the Age of ChatGPT (43:37)
Sanjeev Arora · ICML 2023 · 2 years ago

Which is Better for Learning with Noisy Labels: The Semi-supervised Method or Modeling Label Noise? (05:00)
Yu Yao, … · ICML 2023 · 2 years ago

CounTS: A Self-Interpretable Time Series Prediction with Counterfactual Explanations (07:27)
Jingquan Yan, … · ICML 2023 · 2 years ago

Offline Learning in Markov Games with General Function Approximation (05:15)
Yuheng Zhang, … · ICML 2023 · 2 years ago

Scalable Set Encoding with Universal Mini-Batch Consistency and Unbiased Full Set Gradient Approximation (05:19)
Jeffrey Willette, … · ICML 2023 · 2 years ago
