            Oral: A Distributed Graph-Theoretic Framework for Automatic Parallelization in Multi-core Systems

            Apr 4, 2021

            Speakers

            Guixiang Ma · Speaker · 0 followers

            Yao Xiao · Speaker · 0 followers

            Theodore Willke · Speaker · 0 followers

            About

            The rapid demand for memory and computational resources by emerging complex applications requires multi-core parallel systems capable of scaling the execution of these applications. In this paper, we propose a distributed graph-theoretic framework for automatic parallelization in multi-core systems, where the goal is to minimize data communication while accounting for intrinsic functional interdependence and balancing the workload among cores to improve overall performance. Specificall…
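The optimization the abstract describes, assigning the nodes of a task graph to cores so that the communication crossing core boundaries stays small while per-core compute load stays balanced, can be sketched with a simple greedy heuristic. This is an illustrative sketch only, not the authors' algorithm; `partition_tasks`, its parameters, and the heuristic itself are assumptions for the example:

```python
def partition_tasks(nodes, edges, num_cores, imbalance=1.5):
    """Greedy balanced partitioning sketch (illustrative, not the paper's method).

    nodes: {task: compute_cost}; edges: {(u, v): bytes_transferred}.
    Returns {task: core_index}.
    """
    total = sum(nodes.values())
    cap = imbalance * total / num_cores          # soft per-core load cap
    # Build a symmetric adjacency map of communication weights.
    adj = {t: {} for t in nodes}
    for (u, v), w in edges.items():
        adj[u][v] = adj[u].get(v, 0) + w
        adj[v][u] = adj[v].get(u, 0) + w
    load = [0.0] * num_cores
    assign = {}
    # Place heavy tasks first so balance is easier to maintain.
    for t in sorted(nodes, key=nodes.get, reverse=True):
        best, best_score = None, None
        for c in range(num_cores):
            if load[c] + nodes[t] > cap:
                continue
            # Gain: communication kept local on core c, with a tiny load
            # penalty so ties break toward the lighter core.
            local = sum(w for n, w in adj[t].items() if assign.get(n) == c)
            score = local - load[c] * 1e-9
            if best_score is None or score > best_score:
                best, best_score = c, score
        if best is None:                          # cap too tight: take lightest core
            best = min(range(num_cores), key=load.__getitem__)
        assign[t] = best
        load[best] += nodes[t]
    return assign
```

For instance, with four unit-cost tasks where a↔b and c↔d exchange heavy traffic and a↔c only light traffic, a two-core run under a tight balance cap (`imbalance=1.0`) groups a with b and c with d, cutting only the light edge.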

            Organizer

            MLSys 2021

            Account · 159 followers

            Categories

            AI and Data Science

            Category · 10.8k presentations

            About MLSys 2021

            The Conference on Machine Learning and Systems targets research at the intersection of machine learning and systems. The conference aims to elicit new connections amongst these fields, including identifying best practices and design principles for learning systems, as well as developing novel learning methods and theory tailored to practical machine learning workflows.


            Recommended videos

            Presentations similar in topic, category, or speaker

            Oral: Doping: A technique for Extreme Compression of LSTM Models using Sparse Structured Additive Matrices
            19:07 · Urmish Thakker, … · MLSys 2021 · 4 years ago

            Oral: Scaling Distributed Training with Adaptive Summation
            18:20 · Saeed Maleki, … · MLSys 2021 · 4 years ago

            A Deep Learning Based Cost Model for Automatic Code Optimization
            05:18 · Riyadh Baghdadi, … · MLSys 2021 · 4 years ago

            Pipelined Backpropagation at Scale: Training Large Models without Batches
            04:14 · Atli Kosson, … · MLSys 2021 · 4 years ago

            Deploying Deep Learning Applications on FPGA: Experiences and Learnings
            09:36 · Ashwin Krishnan, … · MLSys 2021 · 4 years ago

            GNNs for Charged Particle Reconstruction at the Large Hadron Collider
            39:13 · Savannah Thais · MLSys 2021 · 4 years ago

            Interested in talks like this? Follow MLSys 2021.