Generalized Accelerated Optimization Framework for Big Data Processing

Endrit Dosti

Research output: Thesis › Doctoral Thesis › Collection of Articles

Abstract

Large-scale optimization problems arise in many fields of engineering and science. Due to the large number of parameters and the different structures that these problems can have, black-box first-order methods are widely used to solve them. Among the existing first-order methods, the most widely used are variants of the Fast Gradient Method (FGM). Such methods are devised within the estimating sequences framework and exhibit desirable properties such as a fast convergence rate and low per-iteration complexity. In this thesis, we devise new estimating sequences and show that they can be used to construct accelerated first-order methods.

We start by considering the simplest case, i.e., minimizing smooth and convex objective functions. For this class of problems, we present a class of generalized estimating sequences, constructed by exploiting the history of the estimating functions obtained during the minimization process. Using these generalized estimating sequences, we devise a new accelerated gradient method and prove that it converges to a tolerated neighborhood of the optimal solution faster than FGM and other first-order methods.

We then consider a more general class of optimization problems, namely composite objectives. For this class of problems, we introduce the class of composite estimating sequences, which are obtained by making use of the gradient mapping framework and a tight lower bound on the function to be minimized. Using these composite estimating sequences, we devise a composite objective accelerated multi-step estimating sequence technique and prove its accelerated convergence rate. Last, by embedding the memory term coming from the previous iterates into the composite estimating sequences, we obtain the generalized composite estimating sequences. Using these estimating sequences, we construct another accelerated gradient method and prove its accelerated convergence rate. The methods introduced in this thesis for minimizing composite objective functions are also equipped with efficient backtracking line-search strategies, which enable more accurate estimates of the step size.

Our results are validated by a large number of computational experiments on different types of loss functions, wherein both simulated and publicly available real-world datasets are considered. The numerical experiments also highlight the robustness of the newly introduced methods to the use of inexact values of the Lipschitz constant and the strong convexity parameter.
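All of the constructions summarized above build on Nesterov's estimating sequences framework. As a point of reference, a standard statement of that definition (notation ours, not the thesis's) is:

```latex
% Estimating sequences (Nesterov, Introductory Lectures, Def. 2.2.1):
% a pair of sequences $\{\varphi_k(x)\}_{k \ge 0}$ and $\{\lambda_k\}_{k \ge 0}$,
% $\lambda_k \ge 0$, with $\lambda_k \to 0$, such that for all $x$ and $k \ge 0$
\varphi_k(x) \le (1 - \lambda_k)\, f(x) + \lambda_k \varphi_0(x).
% If the iterates satisfy $f(x_k) \le \min_x \varphi_k(x)$, then
f(x_k) - f^* \le \lambda_k \bigl( \varphi_0(x^*) - f^* \bigr),
% so the rate at which $\lambda_k \to 0$ dictates the convergence rate.
```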
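The thesis's algorithms themselves are not reproduced in this record, but a minimal sketch of the kind of baseline they are compared against, an accelerated proximal gradient method for composite objectives with backtracking line search (FISTA-style), may help fix ideas. All names and the LASSO example below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def fista_backtracking(f, grad_f, prox_g, x0, L0=1.0, eta=2.0, iters=100):
    """Accelerated proximal gradient (FISTA-style) with backtracking.

    Minimizes F(x) = f(x) + g(x), where f is smooth and convex and g is
    convex with an inexpensive proximal operator. Illustrative baseline
    only; not the methods devised in the thesis.
    """
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(iters):
        gy = grad_f(y)
        # Backtracking: grow the local Lipschitz estimate L until the
        # quadratic model of f at y majorizes f at the prox-gradient point.
        while True:
            x_new = prox_g(y - gy / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= f(y) + gy @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        # Standard FISTA momentum update.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Example usage: LASSO, f(x) = 0.5 * ||Ax - b||^2, g(x) = lam * ||x||_1,
# whose proximal operator is soft-thresholding.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((50, 100)), rng.standard_normal(50), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
x_hat = fista_backtracking(f, grad_f, prox_g, np.zeros(100))
```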
Original language: English
Qualification: Doctor's degree
Awarding Institution
  • Aalto University
Supervisors/Advisors
  • Vorobyov, Sergiy, Supervising Professor
  • Charalambous, Themistoklis, Thesis Advisor
Print ISBNs: 978-952-64-1968-8
Electronic ISBNs: 978-952-64-1969-5
Publication status: Published - 2024
MoE publication type: G5 Doctoral dissertation (article)

Keywords

  • big data
  • fast gradient methods
  • FGM
