Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization

Umut Simsekli, Cagatay Yildiz, Thanh Huy Nguyen, Gael Richard, Ali Taylan Cemgil

Research output: Chapter in book/conference proceedings › Conference contribution › Scientific › peer-reviewed


Abstract

Recent studies have shown that stochastic gradient Markov Chain Monte Carlo (SG-MCMC) techniques have strong potential in non-convex optimization, where local and global convergence guarantees can be established under certain conditions. Building on this recent theory, in this study we develop an asynchronous-parallel stochastic L-BFGS algorithm for non-convex optimization. The proposed algorithm is suitable for both distributed and shared-memory settings. We provide a formal theoretical analysis and show that the proposed method achieves an ergodic convergence rate of O(1/√N) (N being the total number of iterations) and can achieve a linear speedup under certain conditions. We perform several experiments on both synthetic and real datasets. The results support our theory and show that the proposed algorithm provides a significant speedup over the recently proposed synchronous distributed L-BFGS algorithm.
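The abstract describes the method only at a high level. As a rough illustration of the kind of update it refers to, the sketch below combines the standard L-BFGS two-loop recursion with a Langevin-style noisy gradient step on a single worker. This is a minimal sketch, not the authors' algorithm: the function names (two_loop_recursion, sgmcmc_lbfgs_step) are invented for illustration, and the injected noise is isotropic, whereas a properly preconditioned sampler would shape the noise by the square root of the inverse-Hessian approximation and include correction terms.

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: approximates H^{-1} @ grad
    from curvature pairs (s_k, y_k) = (x_{k+1} - x_k, g_{k+1} - g_k)."""
    q = grad.copy()
    alphas = []
    # Backward pass over the memory, newest pair first.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        alphas.append(alpha)
        q -= alpha * y
    if s_list:  # initial Hessian scaling gamma = (s'y)/(y'y)
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    # Forward pass, oldest pair first (alphas were stored newest-first).
    for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q

def sgmcmc_lbfgs_step(x, stoch_grad, s_list, y_list, step_size, rng):
    """One Langevin-style update: quasi-Newton-preconditioned stochastic
    gradient step plus Gaussian exploration noise (temperature 1).
    Simplification: the noise here is isotropic, not preconditioned."""
    direction = two_loop_recursion(stoch_grad(x), s_list, y_list)
    noise = rng.normal(size=x.shape) * np.sqrt(2.0 * step_size)
    return x - step_size * direction + noise

# Toy usage: noisy gradient of f(x) = ||x||^2 / 2, empty L-BFGS memory.
rng = np.random.default_rng(0)
x = rng.normal(size=10)
stoch_grad = lambda x: x + 0.1 * rng.normal(size=x.shape)
x = sgmcmc_lbfgs_step(x, stoch_grad, [], [], step_size=1e-2, rng=rng)
```

In the asynchronous setting the paper targets, each worker would run such updates against possibly stale parameters held in shared memory or on a parameter server, and the curvature pairs (s, y) must be maintained with care, since stochastic gradients can violate the positive-curvature condition y's > 0; the sketch glosses over both issues.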
Original language: English
Title: Proceedings of the 35th International Conference on Machine Learning
Pages: 4681-4690
Status: Published - 2018
OKM publication type: A4 Article in conference proceedings
Event: INTERNATIONAL CONFERENCE ON MACHINE LEARNING - Stockholm, Sweden
Duration: 10 July 2018 - 15 July 2018
Conference number: 35

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 80
ISSN (electronic): 1938-7228

Conference

Conference: INTERNATIONAL CONFERENCE ON MACHINE LEARNING
Abbreviation: ICML
Country: Sweden
City: Stockholm
Period: 10/07/2018 - 15/07/2018

