2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression

Dec 10, 2023

About

We consider distributed convex optimization problems in the regime where communication between the server and the workers is expensive in both the uplink and downlink directions. We develop a new and provably accelerated method, which we call 2Direction, based on fast bidirectional compressed communication and a new bespoke error-feedback mechanism that may be of independent interest. Indeed, we find that the EF and EF21-P mechanisms (Seide et al., 2014; Gruntkowska et al., 2023), which have had considerable success in the design of efficient non-accelerated methods, are not appropriate for accelerated methods. In particular, we prove that 2Direction improves the previous state-of-the-art communication complexity Θ(K × (L/(αμ) + L_max ω/(nμ) + ω)) (Gruntkowska et al., 2023) to Θ(K × (√(L(ω + 1)/(αμ)) + √(L_max ω²/(nμ)) + 1/α + ω)) in the μ-strongly-convex setting, where L and L_max are smoothness constants, n is the number of workers, ω and α are the compression errors of the RandK and TopK sparsifiers (as examples), and K is the number of coordinates/bits that the server and workers send to each other. Moreover, our method is the first to improve upon the communication complexity of vanilla accelerated gradient descent (AGD). We obtain similar improvements in the general convex regime as well. Finally, our theoretical findings are corroborated by experimental evidence.
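For intuition about the ω and α parameters above, here is a minimal sketch (in NumPy; the function names and test values are illustrative, not from the paper) of the two sparsifiers the abstract uses as examples. For a d-dimensional vector, the unbiased RandK sparsifier has variance parameter ω = d/K − 1, while the contractive TopK sparsifier has α = K/d; both send only K of the d coordinates per message.

```python
import numpy as np


def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """RandK: keep k uniformly random coordinates, rescaled by d/k.

    With this rescaling the operator is unbiased, E[C(x)] = x, and
    E||C(x) - x||^2 = (d/k - 1) * ||x||^2, i.e. omega = d/k - 1.
    """
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * x[idx]
    return out


def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """TopK: keep the k largest-magnitude coordinates (biased but contractive).

    Satisfies ||C(x) - x||^2 <= (1 - k/d) * ||x||^2, i.e. alpha = k/d.
    """
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out


# Both operators transmit only k of the d coordinates per message,
# which is the source of the communication savings in the bounds above.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
print(np.count_nonzero(rand_k(x, 10, rng)))  # 10
print(np.count_nonzero(top_k(x, 10)))        # 10
```

Smaller K means cheaper messages but a larger ω (and a smaller α), which is exactly the trade-off the communication-complexity bounds above quantify.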
