Decentralized Smoothing ADMM for Quantile Regression with Non-Convex Sparse Penalties

Reza Mirzaeifard*, Diyako Ghaderyan, Stefan Werner

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

In the rapidly evolving internet-of-things (IoT) ecosystem, effective data analysis techniques are crucial for handling distributed data generated by sensors. This paper addresses the limitations of existing methods, such as sub-gradient approaches that fail to distinguish effectively between active and inactive coefficients, by introducing the decentralized smoothing alternating direction method of multipliers (DSAD) for penalized quantile regression. Our method leverages non-convex sparse penalties such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty, improving the identification and retention of significant predictors. DSAD incorporates a total variation norm within a smoothing ADMM framework, achieving consensus among distributed nodes and ensuring uniform model performance across disparate data sources. This approach overcomes traditional convergence challenges associated with non-convex penalties in decentralized settings. We present a convergence proof and extensive simulation results that validate the effectiveness of DSAD, demonstrating reliable convergence and improved estimation accuracy compared with prior methods.
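To make the abstract concrete, the following is a minimal sketch of the kind of objective the method targets, using the standard quantile check loss and MCP definitions; the decentralized total-variation coupling term and all symbols (x_k, a_{k,i}, y_{k,i}, beta, the network size K, and neighbor sets N_k) are illustrative assumptions rather than the paper's own notation.

\[
\rho_\tau(u) = u\bigl(\tau - \mathbb{1}\{u < 0\}\bigr), \qquad 0 < \tau < 1,
\]
\[
P_{\lambda,\gamma}(t) =
\begin{cases}
\lambda |t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda,\\[4pt]
\dfrac{\gamma\lambda^2}{2}, & |t| > \gamma\lambda,
\end{cases}
\]
\[
\min_{\{x_k\}_{k=1}^{K}} \;
\sum_{k=1}^{K} \Bigl[ \sum_{i=1}^{n_k} \rho_\tau\bigl(y_{k,i} - a_{k,i}^\top x_k\bigr)
+ \sum_{j=1}^{p} P_{\lambda,\gamma}\bigl([x_k]_j\bigr) \Bigr]
+ \beta \sum_{k=1}^{K} \sum_{l \in \mathcal{N}_k} \|x_k - x_l\|_1 .
\]

Here each node k holds n_k local observations (a_{k,i}, y_{k,i}), the MCP term promotes sparsity in the local coefficient vector x_k, and the final l1 total-variation term penalizes disagreement between neighboring nodes so that the network reaches consensus; a smoothing ADMM scheme, as described in the abstract, would handle the non-smooth and non-convex pieces of this objective.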

Original language: English
Pages (from-to): 1915-1919
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 32
DOIs
Publication status: Published - 2025
MoE publication type: A1 Journal article-refereed

Keywords

  • Distributed learning
  • non-convex and non-smooth sparse penalties
  • quantile regression
  • weak convexity
