
            Oral: A Distributed Graph-Theoretic Framework for Automatic Parallelization in Multi-core Systems

April 4, 2021

Speakers

Guixiang Ma
Speaker · 0 followers

Yao Xiao
Speaker · 0 followers

Theodore Willke
Speaker · 0 followers

About the presentation

The rapid demand for memory and computational resources by emerging complex applications requires multi-core parallel systems capable of scaling the execution of these applications. In this paper, we propose a distributed graph-theoretic framework for automatic parallelization in multi-core systems, where the goal is to minimize the data communication while accounting for intrinsic functional interdependence and balancing the workload among cores to improve the overall performance. Specificall…
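The abstract describes partitioning a program's task graph across cores so that cross-core data communication is minimized while the per-core workload stays balanced. A minimal greedy sketch of that general idea (not the paper's actual algorithm; the function, the `alpha` imbalance penalty, and the heaviest-first ordering are all assumptions for illustration):

```python
# Hedged sketch: greedy task-graph partitioning that trades off edge cut
# (data communication between cores) against load imbalance.

def partition(tasks, edges, weights, num_cores, alpha=1.0):
    """Assign each task to a core, greedily minimizing communication
    cost plus a load-imbalance penalty.

    tasks    : iterable of task ids
    edges    : {(u, v): communication_weight}
    weights  : {task: compute_weight}
    """
    load = [0.0] * num_cores          # current compute load per core
    assign = {}                       # task -> core
    # Place heavier tasks first so large items anchor the partition.
    for t in sorted(tasks, key=lambda t: -weights[t]):
        best_core, best_cost = None, float("inf")
        for c in range(num_cores):
            # Communication cost: edges from t to already-placed
            # neighbors that live on a different core.
            cut = sum(w for (u, v), w in edges.items()
                      if (u == t and assign.get(v, c) != c)
                      or (v == t and assign.get(u, c) != c))
            # Balance term: projected load on this core after placing t.
            cost = cut + alpha * (load[c] + weights[t])
            if cost < best_cost:
                best_core, best_cost = c, cost
        assign[t] = best_core
        load[best_core] += weights[t]
    return assign
```

For example, with two tightly coupled task pairs and two cores, the sketch keeps each pair on one core: heavily connected tasks end up co-located (zero cut for their edge), while the `alpha` term spreads the pairs across cores.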

Organizer

MLSys 2021
Account · 159 followers

Category

AI and Data Science
Category · 10.8k presentations

About the organizer (MLSys 2021)

            The Conference on Machine Learning and Systems targets research at the intersection of machine learning and systems. The conference aims to elicit new connections amongst these fields, including identifying best practices and design principles for learning systems, as well as developing novel learning methods and theory tailored to practical machine learning workflows.



Recommended videos

Presentations on a similar topic, category, or speaker

Oral: Doping: A technique for Extreme Compression of LSTM Models using Sparse Structured Additive Matrices
19:07 · Urmish Thakker, … · MLSys 2021 · 4 years ago

Oral: Scaling Distributed Training with Adaptive Summation
18:20 · Saeed Maleki, … · MLSys 2021 · 4 years ago

A Deep Learning Based Cost Model for Automatic Code Optimization
05:18 · Riyadh Baghdadi, … · MLSys 2021 · 4 years ago

Pipelined Backpropagation at Scale: Training Large Models without Batches
04:14 · Atli Kosson, … · MLSys 2021 · 4 years ago

Deploying Deep Learning Applications on FPGA: Experiences and Learnings
09:36 · Ashwin Krishnan, … · MLSys 2021 · 4 years ago

GNNs for Charged Particle Reconstruction at the Large Hadron Collider
39:13 · Savannah Thais · MLSys 2021 · 4 years ago

Interested in similar videos? Follow MLSys 2021.