Abstract
This chapter presents a Matlab implementation of stream-based virtual microphone directional audio coding (DirAC). It describes how to use first-order DirAC in different applications with different theoretical and practical microphone setups for both loudspeaker and headphone playback. The system performs well when the recorded spatial sound matches DirAC's implicit assumptions about the sound field, namely that in each frequency band only a single source is dominant at any one time and that the level of reverberation is moderate. When the recorded sound field strongly violates these assumptions, audible distortions (artifacts) may occur. The artifacts are most pronounced with a high number of loudspeakers, especially in listening conditions with low reverberation. They are most often due to the temporal and spectral effects of decorrelation processing, which is needed to decrease the coherence between loudspeaker signals when the sound field is highly diffuse.
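As a point of reference for the analysis stage mentioned above, the sketch below illustrates how per-band direction and diffuseness estimates are commonly derived from STFT-domain B-format signals in first-order DirAC. This is a minimal sketch only, not the chapter's actual code: the function name `dirac_analysis`, the signal layout, the smoothing constant, and the B-format scaling convention are illustrative assumptions.

```matlab
% Minimal first-order DirAC analysis sketch (illustrative, not the chapter's implementation).
% Inputs W, X, Y, Z are STFT-domain B-format signals, size [numBands x numFrames].
% Outputs: azimuth/elevation (radians) and diffuseness in [0, 1] per time-frequency bin.
function [azi, ele, psi] = dirac_analysis(W, X, Y, Z, alpha)
    if nargin < 5
        alpha = 0.1;            % assumed smoothing constant for temporal averaging
    end

    % Active intensity vector components (up to a constant scaling factor).
    Ix = real(conj(W) .* X);
    Iy = real(conj(W) .* Y);
    Iz = real(conj(W) .* Z);

    % Sound-field energy, assuming traditional B-format scaling (W = p/sqrt(2)).
    E = abs(W).^2 + (abs(X).^2 + abs(Y).^2 + abs(Z).^2) / 2;

    % Direction of arrival points opposite to the direction of energy flow.
    azi = atan2(-Iy, -Ix);
    ele = atan2(-Iz, sqrt(Ix.^2 + Iy.^2));

    % Diffuseness: 1 minus the ratio of the magnitude of the time-averaged
    % intensity to the time-averaged energy (first-order recursive smoothing
    % along frames). The sqrt(2) compensates for the W scaling assumed above;
    % other normalizations would change this factor.
    Ix_s = filter(alpha, [1, alpha - 1], Ix, [], 2);
    Iy_s = filter(alpha, [1, alpha - 1], Iy, [], 2);
    Iz_s = filter(alpha, [1, alpha - 1], Iz, [], 2);
    E_s  = filter(alpha, [1, alpha - 1], E,  [], 2);

    psi = 1 - sqrt(2) * sqrt(Ix_s.^2 + Iy_s.^2 + Iz_s.^2) ./ max(E_s, eps);
    psi = min(max(psi, 0), 1);  % clamp numerical excursions into [0, 1]
end
```

In stream-based virtual microphone DirAC, these direction estimates steer the panning of the non-diffuse part of the virtual-microphone signals, while the diffuseness estimate controls how much of each loudspeaker signal is routed through decorrelation, which is the processing stage identified above as the main source of artifacts.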
| Original language | English |
|---|---|
| Title of host publication | Parametric Time-Frequency Domain Spatial Audio |
| Editors | Ville Pulkki, Symeon Delikaris-Manias, Archontis Politis |
| Publisher | WILEY-BLACKWELL |
| Pages | 89-140 |
| ISBN (Electronic) | 9781119252634 |
| ISBN (Print) | 9781119252597 |
| DOIs | |
| Publication status | Published - 15 Dec 2017 |
| MoE publication type | A3 Part of a book or another research book |
Keywords
- decorrelation artifacts
- first-order directional audio coding
- spatial sound
- stream-based virtual microphone