
            Think Big, Teach Small: Do Language Models Distil Occam’s Razor?

            Dec 6, 2021

Speakers

Gonzalo Jaimovitch

Cesar Ferri

José H. Orallo

            About

            Large language models have recently shown a remarkable ability for few-shot learning, including patterns of algorithmic nature. It is now time to ask what kind of patterns these models can capture and how many examples they need in their prompts. We frame this question as a teaching problem with strong priors, and study whether language models can identify simple algorithmic concepts from small witness sets. In particular, we explore how several GPT architectures, program induction and humans pe…
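The setup the abstract describes, a simple algorithmic concept presented as a small witness set of input/output examples and posed to a language model as a few-shot prompt, then compared against program induction with a simplicity prior, can be sketched minimally. All names here (`build_prompt`, `HYPOTHESES`, `simplest_consistent`) are illustrative assumptions, not from the paper:

```python
def build_prompt(witness_set, query):
    """Format a witness set of (input, output) pairs as a few-shot prompt."""
    lines = [f"Input: {x} Output: {y}" for x, y in witness_set]
    lines.append(f"Input: {query} Output:")
    return "\n".join(lines)

# Witness set for the concept "reverse the string".
witness_set = [("abc", "cba"), ("hello", "olleh")]
print(build_prompt(witness_set, "neurips"))

# A toy program-induction baseline with an Occam's-razor bias: among
# candidate hypotheses consistent with the witness set, prefer the one
# with the shortest description (here, proxied by the name length).
HYPOTHESES = {
    "identity": lambda s: s,
    "reverse": lambda s: s[::-1],
    "upper+reverse": lambda s: s.upper()[::-1],
}

def simplest_consistent(witness_set):
    consistent = [
        name for name, f in HYPOTHESES.items()
        if all(f(x) == y for x, y in witness_set)
    ]
    return min(consistent, key=len) if consistent else None

print(simplest_consistent(witness_set))  # -> "reverse"
```

The interesting empirical question the talk raises is whether a GPT-style model, given only the prompt above, converges on the same simplest consistent concept as the induction baseline, and how many witness examples it needs to do so.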

Organizer

NeurIPS 2021

About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


Recommended Videos

Presentations on a similar topic, category or speaker:

• Loss function based second-order Jensen inequality and its application to particle variational inference (Futoshi Futami, …) 14:09
• Predicting Atlantic Multidecadal Variability (Glenn Liu, …) 09:02
• Stochastic Bias-Reduced Gradient Methods (Yujia Jin, …) 11:42
• TransDreamer: Reinforcement Learning with Transformer World Models (Chang Chen, …) 04:42
• Generative models, inference and symmetries (Danilo J. Rezende, …) 21:59
• AFEC: Active Forgetting of Negative Transfer in Continual Learning (Liyuan Wang) 13:51