Stochastic gradient MCMC methods, such as stochastic gradient Langevin dynamics (SGLD), enable large-scale posterior inference by leveraging noisy but cheap gradient estimates. However, when federated data are non-IID, the variance of distributed gradient estimates is amplified compared to the centralized setting, and infrequent communication rounds cause local chains to diverge from the target posterior. In this work, we introduce the concept of conducive gradients: zero-mean stochastic gradients that serve as a mechanism for sharing probabilistic information between data shards. We propose a novel stochastic gradient estimator that incorporates these conducive gradients, and we show that it improves convergence on federated data compared to distributed SGLD (DSGLD). We evaluate conducive gradient DSGLD (CG-DSGLD) on metric learning and deep MLP tasks. Experiments show that it outperforms standard DSGLD on non-IID federated data.
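To make the idea concrete, the sketch below shows a single SGLD update augmented with an additional zero-mean stochastic term, in the spirit of the conducive gradients described above. This is an illustrative stand-in, not the paper's estimator: the function names, the toy 1D Gaussian target, and the synthetic zero-mean term are all assumptions made for the example. Because the added term has zero mean, the stationary distribution of the chain is (approximately) unchanged.

```python
import numpy as np

def sgld_step(theta, grad_log_post_est, conducive_grad, step_size, rng):
    """One SGLD update with an added zero-mean term.

    theta:              current parameter vector
    grad_log_post_est:  (possibly noisy) estimate of the gradient of the
                        log posterior at theta
    conducive_grad:     a zero-mean stochastic gradient; here a synthetic
                        placeholder for the information-sharing term
    step_size:          SGLD step size (epsilon)
    """
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * (grad_log_post_est + conducive_grad) + noise

# Toy demonstration: sample from a standard normal posterior N(0, 1),
# whose log-density gradient is simply -theta.
rng = np.random.default_rng(0)
theta = np.array([0.0])
samples = []
for t in range(30000):
    grad = -theta                                   # exact gradient of log N(0,1)
    cg = rng.normal(0.0, 0.1, size=theta.shape)     # synthetic zero-mean term
    theta = sgld_step(theta, grad, cg, 0.01, rng)
    if t >= 5000:                                   # discard burn-in
        samples.append(theta[0])
samples = np.asarray(samples)
```

After burn-in, the empirical mean of `samples` should be near 0 and the variance near 1, since the zero-mean term does not bias the chain.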
Status: Submitted - 2021
OKM publication type: B1 Article in a scientific journal


