
            Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning

Dec 10, 2023

Speakers

Xinyi Wang

Speaker · 0 followers

Wanrong Zhu

Speaker · 0 followers

Michael Saxon

Speaker · 0 followers

About

            In recent years, pre-trained large language models (LLMs) have demonstrated remarkable efficiency in achieving an inference-time few-shot learning capability known as in-context learning. However, existing literature has highlighted the sensitivity of this capability to the selection of few-shot demonstrations. Current understandings of the underlying mechanisms by which this capability arises from regular language model pretraining objectives remain disconnected from the real-world LLMs. This s…
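To make the in-context learning setup described in the abstract concrete, here is a minimal sketch (not the paper's method) of how few-shot demonstrations are typically concatenated with a test input into a single prompt; the `format_prompt` helper and the "Input/Label" template are illustrative assumptions, as real prompt formats vary by model.

```python
# Minimal sketch of in-context learning prompt construction.
# Assumption: a simple "Input: ... / Label: ..." template; actual
# demonstration formats differ across models and tasks.
def format_prompt(demonstrations, test_input):
    """Concatenate few-shot (input, label) pairs followed by the test input."""
    lines = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
    # The final block leaves the label blank for the LLM to complete.
    lines.append(f"Input: {test_input}\nLabel:")
    return "\n\n".join(lines)

demos = [("great movie!", "positive"), ("terrible plot.", "negative")]
prompt = format_prompt(demos, "an instant classic")
print(prompt)
```

The sensitivity the abstract mentions arises here: swapping, reordering, or reselecting the `demos` pairs can change the model's completion even though the test input is unchanged.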

Organizer


            NeurIPS 2023

Account · 648 followers


Recommended Videos

Presentations similar in topic, category, or speaker

Scaling Robotics with Foundation Models
18:41
Keerthana Gopalakrishnan, …
NeurIPS 2023 · 16 months ago

Learning a 1-layer conditional generative model in total variation
04:52
Ajil Jalal, …
NeurIPS 2023 · 16 months ago

On Transferring Expert Knowledge from Tabular Data to Images
04:50
Jun-Peng Jiang, …
NeurIPS 2023 · 16 months ago

Thrust: Adaptively Propels Large Language Models with External Knowledge
05:22
Xinran Zhao, …
NeurIPS 2023 · 16 months ago

SNEkhorn: Dimension Reduction with Symmetric Entropic Affinities
04:55
Hugues Van Assel, …
NeurIPS 2023 · 16 months ago

A Multi-modal Global Instance Tracking Benchmark (MGIT): Better Locating Target in Complex Spatio-temporal and Causal Relationship
04:42
Shiyu Hu, …
NeurIPS 2023 · 16 months ago

Interested in talks like this? Follow NeurIPS 2023