Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork
Dec 10, 2023

Speakers

Qiang Gao

Speaker · 0 followers

Xiaojun Shan

Speaker · 0 followers

Yuchen Zhang

Speaker · 0 followers

About

Since competitive subnetworks exist within a dense network, as posited by the Lottery Ticket Hypothesis, we introduce a novel neuron-wise task incremental learning method, namely Data-free Subnetworks (DSN), which aims to enhance elastic knowledge transfer across sequentially arriving tasks. Specifically, DSN transfers knowledge from previously learned tasks to a newly arriving task by selecting the affiliated weights of a small set of neurons to be activated, including t…
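The core idea in the abstract — activating only a small set of neurons and the weights affiliated with them — can be illustrated with a minimal sketch. The scoring rule (L1 weight magnitude), function names, and keep fraction below are illustrative assumptions for a generic neuron-wise mask, not the authors' exact DSN procedure:

```python
import numpy as np

def neuron_mask(weights: np.ndarray, keep_frac: float) -> np.ndarray:
    """Return a 0/1 mask over output neurons (rows of `weights`),
    keeping the keep_frac fraction with the largest L1 weight norm.
    The magnitude-based score is an assumed stand-in for DSN's criterion."""
    scores = np.abs(weights).sum(axis=1)           # one score per neuron
    k = max(1, int(round(keep_frac * len(scores))))
    keep = np.argsort(scores)[-k:]                 # indices of top-k neurons
    mask = np.zeros(len(scores))
    mask[keep] = 1.0
    return mask

def apply_subnetwork(weights: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out the incoming weights of all masked-off neurons,
    leaving only the selected subnetwork active."""
    return weights * mask[:, None]

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))                        # layer with 8 neurons, 4 inputs
m = neuron_mask(W, keep_frac=0.25)                 # activate 2 of 8 neurons
W_sub = apply_subnetwork(W, m)
print(int(m.sum()))                                # prints 2
```

In a task-incremental setting, one such mask per learned task would let the new task reuse (activate) a small, high-scoring subset of existing neurons without storing any of the old tasks' data.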

Organizer


            NeurIPS 2023

Account · 648 followers


Recommended Videos

Presentations similar in topic, category, or speaker

Provable Advantage of Curriculum Learning on Parity Targets with Mixed Inputs
04:57

Emmanuel Abbe, …

NeurIPS 2023 · 16 months ago

Adaptive Normalization for Non-stationary Time Series Forecasting: A Temporal Slice Perspective
04:49

Zhiding Liu, …

NeurIPS 2023 · 16 months ago

LD2: Scalable Heterophilous GNN with Decoupled Embedding
04:39

Ningyi Liao, …

NeurIPS 2023 · 16 months ago

Rapid Learning Without Catastrophic Forgetting in the Morris Water Maze
06:17

Raymond Wang

NeurIPS 2023 · 16 months ago

Simplicity Bias in 1-Hidden Layer Neural Networks
04:21

Depen Morwani, …

NeurIPS 2023 · 16 months ago

Interested in talks like this? Follow NeurIPS 2023