Scalable gradient-based tuning of continuous regularization hyperparameters

Jelena Luketina, Mathias Berglund, Klaus Greff, Tapani Raiko

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

5 Citations (Scopus)

Abstract

Hyperparameter selection generally relies on running multiple full training trials, with selection based on validation set performance. We propose a gradient-based approach for locally adjusting hyperparameters during training of the model. Hyperparameters are adjusted so as to make the model parameter gradients, and hence updates, more advantageous for the validation cost. We explore the approach for tuning regularization hyperparameters and find that, in experiments on MNIST, SVHN and CIFAR-10, the resulting regularization levels are within the optimal regions. The additional computational cost depends on how frequently the hyperparameters are trained, but the tested scheme adds only 30% computational overhead regardless of the model size. Since the method is significantly less computationally demanding than similar gradient-based approaches to hyperparameter optimization, and consistently finds good hyperparameter values, it can be a useful tool for training neural network models.
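The core idea can be illustrated with a one-step look-ahead hypergradient: the regularization hyperparameter affects the training gradient, the training gradient affects the parameter update, and the updated parameters are evaluated on the validation cost. The sketch below is an assumption-laden illustration of that chain of dependence, not the paper's exact update rule; the function names (train_loss, val_loss, hypergrad), the L2 penalty, and the single-SGD-step look-ahead are all illustrative choices.

```python
# Minimal sketch of gradient-based tuning of an L2 penalty via a one-step
# look-ahead hypergradient (illustrative assumption, not the paper's exact scheme).
import jax
import jax.numpy as jnp

def train_loss(theta, log_lmbda, x, y):
    # Regularized training loss: squared error plus an L2 penalty weighted by exp(log_lmbda).
    pred = x @ theta
    return jnp.mean((pred - y) ** 2) + jnp.exp(log_lmbda) * jnp.sum(theta ** 2)

def val_loss(theta, x, y):
    # Unregularized validation loss used to steer the hyperparameter.
    pred = x @ theta
    return jnp.mean((pred - y) ** 2)

def hypergrad(theta, log_lmbda, train_batch, val_batch, lr=0.1):
    # Differentiate the validation loss after one SGD step with respect to the
    # hyperparameter: lambda shapes the parameter update, hence the validation cost.
    def val_after_step(log_l):
        g = jax.grad(train_loss)(theta, log_l, *train_batch)
        theta_next = theta - lr * g
        return val_loss(theta_next, *val_batch)
    return jax.grad(val_after_step)(log_lmbda)

# Toy usage: alternate ordinary SGD steps on theta with occasional hyperparameter steps.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 5))
y = jnp.sum(x, axis=1)
xt, yt = x[:16], y[:16]   # "training" batch
xv, yv = x[16:], y[16:]   # "validation" batch

theta = jnp.zeros(5)
log_lmbda = jnp.log(1e-2)
for step in range(100):
    theta = theta - 0.1 * jax.grad(train_loss)(theta, log_lmbda, xt, yt)
    if step % 10 == 0:  # updating the hyperparameter only occasionally keeps the overhead small
        log_lmbda = log_lmbda - 0.1 * hypergrad(theta, log_lmbda, (xt, yt), (xv, yv))
```

Updating the hyperparameter only every few parameter steps is what keeps the added cost modest; how often to do so is a trade-off between overhead and how closely the regularization level tracks the validation signal.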

Original language: English
Title of host publication: 33rd International Conference on Machine Learning, ICML 2016
Pages: 4333-4341
Number of pages: 9
Volume: 6
ISBN (Electronic): 9781510829008
Publication status: Published - 2016
MoE publication type: A4 Article in a conference publication
Event: International Conference on Machine Learning - New York, United States
Duration: 19 Jun 2016 - 24 Jun 2016
Conference number: 33

Conference

Conference: International Conference on Machine Learning
Abbreviated title: ICML
Country: United States
City: New York
Period: 19/06/2016 - 24/06/2016
