In this letter, we present a generic and computationally efficient method for coupled dictionary learning (CDL). The proposed method enforces relations between the corresponding atoms of dictionaries learned over two related feature spaces (which need not have the same dimensionality), with the aim that each pair of related signals from the two feature spaces shares the same sparse representation with respect to its corresponding learned dictionary. Coupled learned dictionaries have applications in many sparse-representation-based recognition and reconstruction problems in which the two related feature spaces represent the same signal in different modalities or at different qualities. The presented experimental comparisons show that our proposed CDL method is competitive with state-of-the-art CDL methods in performance, while having a significantly lower computational cost. Furthermore, the proposed method extends straightforwardly to learning coupled dictionaries from more than two related feature spaces.
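The letter's specific algorithm is not reproduced here, but the core idea the abstract describes, i.e., paired signals from two feature spaces sharing one sparse code over coupled dictionaries, can be illustrated with the classic joint formulation: stack the paired training signals, learn a single dictionary over the stacked space, and split it row-wise into the two coupled dictionaries. The sketch below is a minimal, self-contained realization with a simple greedy sparse coder and a least-squares dictionary update; all function names and parameter values are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(D, z, k):
    """Greedy (orthogonal-matching-pursuit-style) k-sparse code of z w.r.t. D."""
    idx, r = [], z.copy()
    a = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))
        if j in idx:  # residual already orthogonal to selected atoms
            break
        idx.append(j)
        # Re-fit coefficients over all selected atoms, update residual.
        coef, *_ = np.linalg.lstsq(D[:, idx], z, rcond=None)
        r = z - D[:, idx] @ coef
    a[idx] = coef
    return a

def coupled_dictionary_learning(X, Y, n_atoms=32, k=3, n_iter=5):
    """Joint CDL baseline: one shared sparse code per paired (X, Y) column.

    X: (d1, n) signals from feature space 1; Y: (d2, n) paired signals
    from feature space 2 (dimensionalities may differ). Returns Dx, Dy
    whose columns correspond atom-by-atom.
    """
    Z = np.vstack([X, Y])  # stack so both spaces share one code
    D = Z[:, rng.choice(Z.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(n_iter):
        # Sparse coding step: one shared code per stacked training pair.
        A = np.column_stack([omp(D, Z[:, i], k) for i in range(Z.shape[1])])
        # Dictionary update: ridge-regularized least squares, then renormalize.
        D = Z @ A.T @ np.linalg.inv(A @ A.T + 1e-6 * np.eye(n_atoms))
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    d1 = X.shape[0]
    return D[:d1], D[d1:]  # coupled dictionaries with matching atom order

# Illustrative usage on synthetic paired data of different dimensionalities.
X = rng.standard_normal((20, 100))
Y = rng.standard_normal((15, 100))
Dx, Dy = coupled_dictionary_learning(X, Y)
```

Because every stacked pair is coded once against the stacked dictionary, corresponding atoms of `Dx` and `Dy` are coupled by construction; stacking more than two feature spaces extends the same sketch directly.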
|Journal||IEEE Signal Processing Letters|
|Early online date||2019|
|Publication status||Published - Oct 2019|
|MoE publication type||A1 Journal article-refereed|
- Dictionaries, Machine learning, Signal processing algorithms, Encoding, Complexity theory, Optimization, Image resolution, Coupled dictionary learning, feature space learning, sparse representation