
Randomization matters. How to defend against strong adversarial attacks

            Jul 12, 2020

            Speakers

Rafael Pinot

Raphael Ettedgui

Geovani Rizk

            About

            Is there a classifier that ensures optimal robustness against all adversarial attacks? This paper answers this question by adopting a game-theoretic point of view. We show that adversarial attacks and defenses form an infinite zero-sum game where classical results (e.g. Nash or Sion theorems) do not apply. We demonstrate the non-existence of a Nash equilibrium in our game when the classifier and the adversary are both deterministic, hence giving a negative answer to the above question in the det…
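For readers unfamiliar with the setup, the game described in the abstract can be sketched with the usual min-max formulation of adversarial risk. The notation below (classifier h, hypothesis class \mathcal{H}, perturbation \delta of bounded norm, loss \ell, data distribution \mathcal{D}) is a generic assumption for illustration, not taken from the talk itself: the defender and the adversary optimize the same expected loss in opposite directions,

\[
\inf_{h \in \mathcal{H}} \; \sup_{\|\delta\| \le \varepsilon} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[\ell\bigl(h(x+\delta),\, y\bigr)\bigr],
\]

which makes it a zero-sum game. A Nash equilibrium would be a pair of strategies from which neither player gains by deviating unilaterally; the claim above is that no such equilibrium exists when both players are restricted to deterministic strategies, which is what motivates randomizing the classifier.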

            Organizer

ICML 2020

            Categories

            AI & Data Science

            Category · 10.8k presentations

            Cybersecurity

            Category · 59 presentations

            About ICML 2020

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

            Recommended Videos

Presentations on a similar topic, in the same category, or by the same speakers

On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent
Scott Pesme, … · ICML 2020 · 15:19

The Power Spherical Distribution
Wilker Aziz, … · ICML 2020 · 06:13

Global Concavity and Optimization in a Class of Dynamic Discrete Choice Models
Yiding Feng, … · ICML 2020 · 14:33

Invited talk: From skills to tasks: Reusing & generalizing knowledge for motor control
Nicolas Heess · ICML 2020 · 49:15

Parameter-free Online Optimization - Part 3
Francesco Orabona, … · ICML 2020 · 44:00

Calibrated Top-1 Uncertainty estimates for classification by score based models
Adam M Oberman, … · ICML 2020 · 05:12
