            How can classical multidimensional scaling go wrong?

December 6, 2021

Speakers

            Rishi Sonthalia

Speaker · 0 followers

            Greg Van Buskirk

Speaker · 0 followers

            Benjamin Raichel

Speaker · 0 followers

About the presentation

            Given a matrix D describing the pairwise dissimilarities of a data set, a common task is to embed the data points into Euclidean space. The classical multidimensional scaling (cMDS) algorithm is a widespread method to do this. However, theoretical analysis of the robustness of the algorithm and an in-depth analysis of its performance on non-Euclidean metrics is lacking. In this paper, we derive a formula, based on the eigenvalues of a matrix obtained from D, for the Frobenius norm of the differe…
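
The abstract centers on the classical multidimensional scaling (cMDS) algorithm applied to a dissimilarity matrix D. As a point of reference, below is a minimal NumPy sketch of standard cMDS (double-centering followed by an eigendecomposition); it is not the paper's analysis, and the function name and toy data are illustrative only.

```python
import numpy as np

def classical_mds(D, k=2):
    """Minimal classical MDS sketch.

    D : (n, n) symmetric matrix of pairwise dissimilarities.
    k : target embedding dimension.
    Returns an (n, k) array of Euclidean coordinates.
    """
    n = D.shape[0]
    # Double-centering: B = -1/2 * J D^2 J, with J = I - (1/n) 11^T.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Eigendecomposition of the symmetric centered matrix B.
    eigvals, eigvecs = np.linalg.eigh(B)
    # Keep the k largest eigenvalues; cMDS discards negative ones,
    # which is exactly where a non-Euclidean D can cause trouble.
    idx = np.argsort(eigvals)[::-1][:k]
    top = np.clip(eigvals[idx], 0.0, None)
    return eigvecs[:, idx] * np.sqrt(top)

# Toy usage: distances from random points recover a 2-D embedding.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Y = classical_mds(D, k=2)
    print(Y.shape)  # (5, 2)
```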

Organizer

            NeurIPS 2021

Account · 1.9k followers

About the organizer (NeurIPS 2021)

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


Recommended videos

Presentations on a similar topic, in the same category, or by the same speaker

Just Mix Once: Mixing Samples with Implicit Group Distribution
04:46
Giorgio Giannone, …
NeurIPS 2021 · 3 years ago

Recursive Bayesian Networks: Generalising and Unifying Probabilistic Context-Free Grammars and Dynamic Bayesian Networks
15:00
R. Lieck, …
NeurIPS 2021 · 3 years ago

Learning Robust Hierarchical Patterns of Human Brain across Many fMRI Studies
13:32
Dushyant Sahoo, …
NeurIPS 2021 · 3 years ago

Integrating single-cell multi-omic datasets with optimal transport
28:13
Pinar Demetci
NeurIPS 2021 · 3 years ago

Towards a Shared Rubric for Dataset Annotation
02:06
Andrew Greene
NeurIPS 2021 · 3 years ago

Ground-Truth, Whose Truth?: Examining the Challenges with Annotating Toxic Text Datasets
01:56
Kofi Arhin, …
NeurIPS 2021 · 3 years ago

Interested in similar videos? Follow NeurIPS 2021