
            A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

            Dec 6, 2022

            Speakers

            Yuanxin Liu

            Fandong Meng

            Zheng Lin

            About

            Despite the remarkable success of pre-trained language models (PLMs), they still face two challenges: First, large-scale PLMs are inefficient in terms of memory footprint and computation. Second, on the downstream tasks, PLMs tend to rely on the dataset bias and struggle to generalize to out-of-distribution (OOD) data. In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting the performance. Such subnetworks can be found i…
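The sparse-subnetwork idea mentioned in the abstract can be illustrated with a minimal magnitude-pruning sketch. This is not the paper's method, just a toy example: the function name, the toy weight list, and the 50% sparsity level are all illustrative assumptions.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights.

    A minimal sketch of extracting a sparse subnetwork by magnitude
    pruning; `sparsity` is the fraction of weights set to zero.
    """
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune smallest-magnitude weights.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned_idx = set(order[:n_prune])
    mask = [0 if i in pruned_idx else 1 for i in range(len(weights))]
    return [w * m for w, m in zip(weights, mask)], mask

weights = [0.8, -0.05, 1.2, 0.01, -0.9, 0.3]
pruned, mask = magnitude_prune(weights, 0.5)
# The three smallest-magnitude weights (0.01, -0.05, 0.3) are zeroed:
# pruned == [0.8, 0.0, 1.2, 0.0, -0.9, 0.0]
```

In practice such masks are found per weight matrix of the PLM (e.g. via iterative pruning), and the claim in the abstract is that a well-chosen mask preserves downstream performance.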

            Organizer


            NeurIPS 2022



            Recommended Videos

            Presentations on similar topic, category or speaker

            Data Science & Inequalities · Kamaldeep Bhui · NeurIPS 2022 · 08:37

            [Re] Background-Aware Pooling and Noise-Aware Loss for Weakly-Supervised Semantic Segmentation · Aryan Mehta, … · NeurIPS 2022 · 04:47

            D2NeRF: Self-Supervised Decoupling of Dynamic and Static Objects from a Monocular Video · Tianhao Wu, … · NeurIPS 2022 · 04:13

            Grow and Merge: A Unified Framework for Continuous Categories Discovery · Xinwei Zhang, … · NeurIPS 2022 · 04:42

            Locally Hierarchical Auto-Regressive Modeling for Image Generation · Tackgeun You, … · NeurIPS 2022 · 04:03

            Optimal Positive Generation via Latent Transformation for Contrastive Learning · Yinqi Li, … · NeurIPS 2022 · 04:40
