Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization

Umut Simsekli, Cagatay Yildiz, Thanh Huy Nguyen, Gael Richard, Ali Taylan Cemgil

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Abstract

Recent studies have shown that stochastic gradient Markov Chain Monte Carlo techniques have strong potential in non-convex optimization, where local and global convergence guarantees can be established under certain conditions. Building on this recent theory, we develop an asynchronous-parallel stochastic L-BFGS algorithm for non-convex optimization. The proposed algorithm is suitable for both distributed and shared-memory settings. We provide a formal theoretical analysis, showing that the proposed method achieves an ergodic convergence rate of O(1/√N), where N is the total number of iterations, and can attain a linear speedup under certain conditions. We perform several experiments on both synthetic and real datasets. The results support our theory and show that the proposed algorithm provides a significant speedup over the recently proposed synchronous distributed L-BFGS algorithm.
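The abstract builds on stochastic gradient MCMC (SG-MCMC) used as a non-convex optimizer. As a rough illustration of that underlying idea only, the following is a minimal sketch of a plain SGLD-style update (gradient step plus injected Gaussian noise); it is not the paper's asynchronous quasi-Newton (L-BFGS) sampler, and the names used here (sgld_step, grad_fn, step_size, inverse_temp) are assumptions made for this example.

```python
import numpy as np

# Minimal sketch of a stochastic gradient MCMC (SGLD-style) update used as a
# non-convex optimizer. NOTE: this is NOT the paper's asynchronous quasi-Newton
# (L-BFGS) sampler; it only illustrates the basic SG-MCMC iteration the
# abstract refers to. All names and parameter values are illustrative.

def sgld_step(theta, grad_fn, minibatch, step_size=1e-3, inverse_temp=1e4,
              rng=np.random.default_rng()):
    """One SGLD update: take a stochastic gradient step and add scaled noise."""
    g = grad_fn(theta, minibatch)            # stochastic gradient of the loss
    noise = rng.normal(size=theta.shape)     # Gaussian exploration noise
    return theta - step_size * g + np.sqrt(2.0 * step_size / inverse_temp) * noise

# Toy usage: minimize the non-convex function f(x) = (x^2 - 1)^2.
if __name__ == "__main__":
    f_grad = lambda x, _: 4.0 * x * (x ** 2 - 1.0)   # gradient of f
    x = np.array([3.0])
    for _ in range(5000):
        x = sgld_step(x, f_grad, None)
    print("approximate minimizer:", x)  # expected to land near +1 or -1
```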
Original language: English
Title of host publication: Proceedings of the 35th International Conference on Machine Learning
Pages: 4681-4690
Publication status: Published - 2018
MoE publication type: A4 Article in a conference publication
Event: International Conference on Machine Learning - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018
Conference number: 35

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 80
ISSN (Electronic): 1938-7228

Conference

Conference: International Conference on Machine Learning
Abbreviated title: ICML
Country/Territory: Sweden
City: Stockholm
Period: 10/07/2018 - 15/07/2018
