            Make Some Noise: Reliable and Efficient Single-Step Adversarial Training

November 28, 2022

Speakers

Pau de Jorge

Adel Bibi

Riccardo Volpi

About the presentation

Recently, Wong et al. (2020) showed that adversarial training with single-step FGSM leads to a characteristic failure mode named catastrophic overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks. Experimentally, they showed that simply adding a random perturbation prior to FGSM (RS-FGSM) could prevent CO. However, Andriushchenko & Flammarion (2020) observed that RS-FGSM still leads to CO for larger perturbations, and proposed a computationally expensive regularizer (…
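For context, here is a minimal PyTorch sketch of the RS-FGSM step described above: start from a uniform random perturbation inside the L∞ ball, then take one FGSM step. The function name, the step size `alpha`, and the final pixel-range clamp are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def rs_fgsm_perturb(model, loss_fn, x, y, eps, alpha):
    """Illustrative RS-FGSM sketch: random start + one FGSM step.

    eps is the L-inf perturbation budget, alpha the FGSM step size;
    both names are assumptions made for this example.
    """
    # Uniform random start inside the L-inf ball of radius eps.
    delta = torch.empty_like(x).uniform_(-eps, eps)
    delta.requires_grad_(True)

    # One FGSM step: move delta along the sign of the loss gradient.
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps).detach()

    # Keep the adversarial example in the valid pixel range [0, 1].
    return (x + delta).clamp(0.0, 1.0)
```

During training the model would then be updated on the returned examples; the observation of Andriushchenko & Flammarion (2020) is that this recipe alone still collapses into catastrophic overfitting once the budget eps grows large.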

Organizer

            NeurIPS 2022

Recommended videos

Presentations on a similar topic, category, or by the same speaker

Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost
09:56
Sungjun Cho, …
NeurIPS 2022 · 2 years ago

Peptide-MHC Structure Prediction With Mixed Residue and Atom Graph Neural Network
02:16
Antoine Delaunay, …
NeurIPS 2022 · 2 years ago

Open High-Resolution Satellite Imagery: The WorldStrat Dataset -- With Application to Super-Resolution
04:52
Julien Cornebise, …
NeurIPS 2022 · 2 years ago

How and Why to Manipulate Your Own Agent: On the Incentives of Users of Learning Agents
05:00
Yoav Kolumbus, …
NeurIPS 2022 · 2 years ago

Generalization Error Bounds on Deep Learning with Markov Datasets
04:44
Lan V. Truong
NeurIPS 2022 · 2 years ago

Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting
04:56
Alex Ororbia, …
NeurIPS 2022 · 2 years ago
