
            Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies

            Jul 24, 2023

            Speakers

Gati Aher

Speaker · 0 followers

Adam Tauman Kalai

Speaker · 0 followers

Rosa I. Arriaga

Speaker · 0 followers

            About

            We introduce a new type of test, called a Turing Experiment (TE), for evaluating how well a language model, such as GPT-3, can simulate different aspects of human behavior. Unlike the Turing Test, which involves simulating a single arbitrary individual, a TE requires simulating a representative sample of participants in human subject research. We give TEs that attempt to replicate well-established findings in prior studies. We design a methodology for simulating TEs and illustrate its use to com…
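As a rough illustration of the idea (not the authors' implementation), the sketch below shows one way such a simulation could be set up: the same study question is posed to a language model once per simulated participant, with the participant identity varied, and the responses are tallied as if they came from a sample of human subjects. The model stub, prompt wording, and names used here are assumptions for illustration only.

```python
# Minimal sketch (illustrative only, not the authors' code): a
# "Turing Experiment" style simulation that prompts a language model
# once per simulated participant and aggregates the answers.

import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical helper: in practice this would call a real LLM
    (e.g. GPT-3); stubbed here so the sketch runs standalone."""
    return random.choice(["yes", "no"])

# A pool of names used to vary the simulated participant identity,
# one simple way to approximate a sample of distinct subjects.
PARTICIPANT_NAMES = ["Alice Chen", "Marcus Lee", "Priya Patel", "Tom Novak"]

STUDY_QUESTION = (
    "You are {name}, a participant in a psychology study.\n"
    "Question: Would you cooperate with your partner in this game? "
    "Answer yes or no.\nAnswer:"
)

def run_turing_experiment(n_participants: int = 100) -> Counter:
    """Simulate n_participants subjects and tally their answers."""
    tally = Counter()
    for i in range(n_participants):
        name = PARTICIPANT_NAMES[i % len(PARTICIPANT_NAMES)]
        answer = query_model(STUDY_QUESTION.format(name=name)).strip().lower()
        tally[answer] += 1
    return tally

if __name__ == "__main__":
    print(run_turing_experiment(100))  # e.g. Counter({'yes': 53, 'no': 47})
```

The aggregate distribution of simulated answers could then be compared against the distribution reported in the original human subject study.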

            Organizer


            ICML 2023

Account · 657 followers


            Recommended Videos

Presentations on a similar topic, category, or speaker

Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning
05:21
Yu Meng, …
ICML 2023 · 2 years ago

Tensor Decompositions Meet Control Theory: Learning General Mixtures of Linear Dynamical Systems
04:59
Allen Liu, …
ICML 2023 · 2 years ago

Why Is Public Pretraining Necessary for Private Model Training?
05:21
Arun Ganesh, …
ICML 2023 · 2 years ago

Self-supervised Neural Factor Analysis for Disentangle Speech Representations
04:18
Weiwei Lin
ICML 2023 · 2 years ago

RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations
05:08
Zirui Liu, …
ICML 2023 · 2 years ago

Closing Remarks
04:52
Cameron Park
ICML 2023 · 2 years ago
