First-Order Directional Audio Coding (DirAC)

Ville Pulkki, Archontis Politis, Mikko-Ville Laitinen, Juha Vilkamo, Jukka Ahonen

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review

8 Citations (Scopus)

Abstract

This chapter presents a Matlab implementation of stream-based virtual-microphone directional audio coding (DirAC). It describes how to use first-order DirAC in different applications, with different theoretical and practical microphone setups, for both loudspeaker and headphone playback. The system performs well when the recorded spatial sound matches DirAC's implicit assumptions about the sound field: that at each frequency band only a single source is dominant at a time, with a moderate level of reverberation. When the recorded sound field strongly violates these assumptions, audible distortions (artifacts) may occur. The artifacts are most pronounced with a high number of loudspeakers, and especially in listening conditions with low reverberation. They are most often due to the temporal and spectral effects of decorrelation processing, which is needed to decrease the coherence between loudspeaker signals when the sound field is highly diffuse.
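The abstract only summarizes the analysis model, so as a point of reference the sketch below shows how first-order DirAC analysis is commonly implemented: per time-frequency tile, the direction of arrival is taken from the negated active intensity vector, and diffuseness from the ratio of the time-averaged intensity magnitude to the energy density. This is a minimal Python/NumPy sketch, not the chapter's Matlab code; the channel normalization (a plane wave from direction n giving equal amplitude in w and in the projection of [x, y, z] onto n), the STFT parameters, and the averaging length are assumptions made here for illustration.

```python
import numpy as np
from scipy.signal import stft
from scipy.ndimage import uniform_filter1d

def dirac_analysis(w, x, y, z, fs, nfft=1024, avg_frames=8):
    """Per-band direction-of-arrival and diffuseness estimates from
    first-order (B-format-style) signals, following the standard
    DirAC analysis based on active intensity and energy density.

    Assumes w, x, y, z are scaled so that a unit plane wave from
    direction n gives w = 1 and [x, y, z] = n."""
    # Time-frequency transform of each channel (STFT parameters are
    # illustrative choices, not the chapter's).
    _, _, W = stft(w, fs, nperseg=nfft)
    _, _, X = stft(x, fs, nperseg=nfft)
    _, _, Y = stft(y, fs, nperseg=nfft)
    _, _, Z = stft(z, fs, nperseg=nfft)

    # Active intensity vector per tile (up to physical constants);
    # it points in the direction of net energy flow, i.e. away from
    # the source.
    I = np.stack([np.real(np.conj(W) * X),
                  np.real(np.conj(W) * Y),
                  np.real(np.conj(W) * Z)])        # shape (3, freq, time)

    # Energy density per tile (up to the same constants).
    E = 0.5 * (np.abs(W)**2 + np.abs(X)**2 + np.abs(Y)**2 + np.abs(Z)**2)

    # Direction of arrival is opposite to the intensity vector.
    azi = np.arctan2(-I[1], -I[0])
    ele = np.arctan2(-I[2], np.hypot(I[0], I[1]))

    # Diffuseness: 0 for a single plane wave, approaching 1 when the
    # time-averaged net energy flow vanishes. A short moving average
    # over frames stands in for the expectation operator.
    I_avg = uniform_filter1d(I, size=avg_frames, axis=-1)
    E_avg = uniform_filter1d(E, size=avg_frames, axis=-1)
    psi = 1.0 - np.linalg.norm(I_avg, axis=0) / (E_avg + 1e-12)

    return azi, ele, np.clip(psi, 0.0, 1.0)
```

In synthesis, the non-diffuse portion of each band is panned toward the estimated direction while the remainder, weighted by the diffuseness, is decorrelated and distributed across the loudspeakers; that decorrelation stage is the source of the artifacts discussed in the abstract.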
Original language: English
Title of host publication: Parametric Time-Frequency Domain Spatial Audio
Editors: Ville Pulkki, Symeon Delikaris-Manias, Archontis Politis
Publisher: WILEY-BLACKWELL
Pages: 89-140
ISBN (Electronic): 9781119252634
ISBN (Print): 9781119252597
DOIs
Publication status: Published - 15 Dec 2017
MoE publication type: A3 Part of a book or another research book

Keywords

  • decorrelation artifacts
  • first-order directional audio coding
  • spatial sound
  • stream-based virtual microphone
