Localized Lasso for High-Dimensional Regression

Makoto Yamada, Koh Takeuchi, Tomoharu Iwata, John Shawe-Taylor, Samuel Kaski

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review



We introduce the localized Lasso, which learns models that are both interpretable and highly predictive in problems with high dimensionality d and small sample size n. More specifically, we consider a function defined by local sparse models, one at each data point. We introduce sample-wise network regularization to borrow strength across the models, and sample-wise exclusive group sparsity (a.k.a. the ℓ1,2 norm) to introduce diversity into the choice of feature sets in the local models. The local models are interpretable in terms of the similarity of their sparsity patterns. The cost function is convex, and thus has a globally optimal solution. Moreover, we propose a simple yet efficient iterative least-squares based optimization procedure for the localized Lasso, which needs no tuning parameter and is guaranteed to converge to a globally optimal solution. The solution is empirically shown to outperform alternatives on both simulated data and genomic personalized/precision medicine data.
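To make the abstract's three ingredients concrete, the following is a minimal sketch of an objective of the kind described: a per-sample squared loss with one local model per data point, a network-regularization term that ties the models of similar samples together, and an exclusive (ℓ1,2) sparsity term, taken here as the squared ℓ1 norm of each local model. The function name, arguments, and exact weighting are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def localized_lasso_objective(W, X, y, R, lam_net, lam_excl):
    """Hypothetical objective for illustration only.

    W: (n, d) array, one local model w_i per sample
    X: (n, d) inputs, y: (n,) targets
    R: (n, n) nonnegative sample-similarity graph
    lam_net, lam_excl: regularization weights (assumed names)
    """
    # Per-sample squared loss: each sample i is fit by its own model w_i.
    residuals = y - np.sum(W * X, axis=1)
    loss = np.sum(residuals ** 2)

    # Network regularization: penalize ||w_i - w_j||_2 weighted by the
    # graph R, borrowing strength across neighboring local models.
    diffs = W[:, None, :] - W[None, :, :]          # shape (n, n, d)
    net = np.sum(R * np.linalg.norm(diffs, axis=2))

    # Exclusive group sparsity: squared l1 norm of each local model,
    # encouraging diverse feature sets across samples.
    excl = np.sum(np.sum(np.abs(W), axis=1) ** 2)

    return loss + lam_net * net + lam_excl * excl
```

Each term is convex in W, which is consistent with the abstract's claim that the overall cost function is convex and admits a globally optimal solution.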
Original language: English
Title of host publication: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
Editors: Aarti Singh, Jerry Zhu
Place of Publication: Fort Lauderdale, FL, USA
Number of pages: 9
Publication status: Published - 1 Aug 2017
MoE publication type: A4 Article in a conference publication
Event: International Conference on Artificial Intelligence and Statistics - Hyatt Pier 66 Hotel, Fort Lauderdale, United States
Duration: 20 Apr 2017 - 22 Apr 2017
Conference number: 20

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 1938-7228


Conference: International Conference on Artificial Intelligence and Statistics
Abbreviated title: AISTATS
Country: United States
City: Fort Lauderdale
