[ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training (Python; updated Jul 10, 2024)
Vector quantization for stochastic gradient descent.
Simple Implementation of the CVPR 2024 Paper "JointSQ: Joint Sparsification-Quantization for Distributed Learning"
Distributed optimizer implemented with TensorFlow MPI operations.
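The repositories above all revolve around gradient compression: sending only a small, heavily compressed summary of each worker's gradient instead of the full dense tensor. A common building block (used, for example, by Deep Gradient Compression) is top-k sparsification with error feedback, where only the largest-magnitude entries are transmitted and the untransmitted residual is carried over to the next step. The sketch below is an illustrative NumPy implementation of that idea under our own assumptions (function name `topk_sparsify`, compression `ratio`); it is not the actual code of any repository listed here.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries.

    Returns the sparsified gradient (same shape, mostly zeros) and the
    residual, which error-feedback schemes add back into the next step's
    gradient so that no update is permanently lost.
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # Indices of the k largest-magnitude entries (unordered partition is enough)
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    # Everything not transmitted becomes the residual for error feedback
    residual = flat - sparse
    return sparse.reshape(grad.shape), residual.reshape(grad.shape)
```

In a distributed setting, each worker would transmit only the `k` surviving (index, value) pairs, and `sparse + residual` always reconstructs the original gradient exactly, so accumulating the residual locally preserves convergence.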