Abstract
In the rapidly evolving internet-of-things (IoT) ecosystem, effective data-analysis techniques are crucial for handling distributed data generated by sensors. Addressing the limitations of existing methods, such as the sub-gradient approach, which fails to distinguish effectively between active and non-active coefficients, this paper introduces the decentralized smoothing alternating direction method of multipliers (DSAD) for penalized quantile regression. Our method leverages non-convex sparse penalties, such as the minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD), to improve the identification and retention of significant predictors. DSAD incorporates a total variation norm within a smoothing ADMM framework, achieving consensus among distributed nodes and ensuring uniform model performance across disparate data sources. This approach overcomes the convergence challenges traditionally associated with non-convex penalties in decentralized settings. We present a convergence proof and extensive simulation results that validate the effectiveness of DSAD, demonstrating more reliable convergence and higher estimation accuracy than prior methods.
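As context for the abstract, the following is a minimal, self-contained sketch (not the authors' implementation) of two building blocks it mentions: a smoothed quantile check loss and the SCAD and MCP penalties. The quadratic smoothing near the kink, the smoothing width `delta`, and all function names here are illustrative assumptions; the paper's exact smoothing and the decentralized ADMM updates are given in the article itself.

```python
import numpy as np

def smoothed_check_loss(r, tau, delta=0.1):
    """Quantile check loss rho_tau(r) = r * (tau - 1{r < 0}), with the kink
    at r = 0 replaced by a quadratic piece on [-delta, delta] so the loss is
    differentiable (an illustrative smoothing, not the paper's exact choice)."""
    quad = r**2 / (4.0 * delta) + (tau - 0.5) * r + delta / 4.0
    exact = r * (tau - (r < 0).astype(float))
    return np.where(np.abs(r) <= delta, quad, exact)

def scad_penalty(beta, lam, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty, elementwise."""
    b = np.abs(beta)
    mid = (2.0 * a * lam * b - b**2 - lam**2) / (2.0 * (a - 1.0))
    flat = (a + 1.0) * lam**2 / 2.0
    return np.where(b <= lam, lam * b, np.where(b <= a * lam, mid, flat))

def mcp_penalty(beta, lam, gamma=3.0):
    """Minimax concave penalty (MCP), elementwise."""
    b = np.abs(beta)
    rising = lam * b - b**2 / (2.0 * gamma)
    flat = gamma * lam**2 / 2.0
    return np.where(b <= gamma * lam, rising, flat)

# Toy node-local objective: smoothed quantile loss plus a sparse penalty.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
beta = np.array([1.0, 0.0, -0.5, 0.0, 0.0])
y = X @ beta + rng.standard_normal(50)
residual = y - X @ beta
objective = smoothed_check_loss(residual, tau=0.5).mean() + scad_penalty(beta, lam=0.1).sum()
print(f"penalized objective: {objective:.4f}")
```

In the decentralized setting the abstract describes, each node would minimize such a penalized objective on its local data, while ADMM consensus terms (e.g., a total variation penalty on differences between neighboring nodes' estimates) keep the per-node estimates aligned.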
| Original language | English |
| --- | --- |
| Pages (from-to) | 1915-1919 |
| Number of pages | 5 |
| Journal | IEEE Signal Processing Letters |
| Volume | 32 |
| DOIs | |
| Publication status | Published - 2025 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- Distributed learning
- non-convex and non-smooth sparse penalties
- quantile regression
- weak convexity
Projects
- PLEDGE: Personalized online Learning on the Edge in IoT/CPS
Werner, S. (Principal investigator)
01/09/2023 → 31/08/2027
Project: RCF Academy Project