TY - GEN
T1 - Spatial Mixup: Directional Loudness Modification as Data Augmentation for Sound Event Localization and Detection
AU - Falcon Perez, Ricardo
AU - Shimada, Kazuki
AU - Koyama, Yuichiro
AU - Takahashi, Shusuke
AU - Mitsufuji, Yuki
PY - 2022/4/27
Y1 - 2022/4/27
N2 - Data augmentation methods are of great importance in diverse supervised learning problems where labeled data is scarce or costly to obtain. For sound event localization and detection (SELD) tasks, several augmentation methods have been proposed, most borrowing ideas from other domains such as images, speech, or monophonic audio. However, only a few exploit the spatial properties of a full 3D audio scene. We propose Spatial Mixup, an application of parametric spatial audio effects for data augmentation, which modifies the directional properties of a multi-channel spatial audio signal encoded in the ambisonics domain. Similarly to beamforming, these modifications enhance or suppress signals arriving from certain directions, although the effect is less pronounced, thereby enabling deep learning models to achieve invariance to small spatial perturbations. The method is evaluated on the DCASE 2021 Task 3 dataset, where Spatial Mixup improves performance over a non-augmented baseline and is comparable to other well-known augmentation methods. Furthermore, combining Spatial Mixup with other methods greatly improves performance.
AB - Data augmentation methods are of great importance in diverse supervised learning problems where labeled data is scarce or costly to obtain. For sound event localization and detection (SELD) tasks, several augmentation methods have been proposed, most borrowing ideas from other domains such as images, speech, or monophonic audio. However, only a few exploit the spatial properties of a full 3D audio scene. We propose Spatial Mixup, an application of parametric spatial audio effects for data augmentation, which modifies the directional properties of a multi-channel spatial audio signal encoded in the ambisonics domain. Similarly to beamforming, these modifications enhance or suppress signals arriving from certain directions, although the effect is less pronounced, thereby enabling deep learning models to achieve invariance to small spatial perturbations. The method is evaluated on the DCASE 2021 Task 3 dataset, where Spatial Mixup improves performance over a non-augmented baseline and is comparable to other well-known augmentation methods. Furthermore, combining Spatial Mixup with other methods greatly improves performance.
KW - sound event detection
KW - spatial audio
KW - sound localization
UR - http://www.scopus.com/inward/record.url?scp=85131260375&partnerID=8YFLogxK
U2 - 10.1109/ICASSP43922.2022.9747312
DO - 10.1109/ICASSP43922.2022.9747312
M3 - Conference article in proceedings
SN - 978-1-6654-0541-6
T3 - IEEE International Conference on Acoustics, Speech and Signal Processing
SP - 431
EP - 435
BT - Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
PB - IEEE
CY - United States
T2 - IEEE International Conference on Acoustics, Speech, and Signal Processing
Y2 - 23 May 2022 through 27 May 2022
ER -