Object-Based Six-Degrees-of-Freedom Rendering of Sound Scenes Captured with Multiple Ambisonic Receivers

Leo McCormack, Archontis Politis, Thomas McKenzie, Christoph Hold, Ville Pulkki

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

This article proposes a system for object-based six-degrees-of-freedom (6DoF) rendering of spatial sound scenes that are captured using a distributed arrangement of multiple Ambisonic receivers. The approach first identifies and tracks the positions of sound sources within the scene, and then isolates their signals using beamformers. These sound objects are subsequently spatialized over the target playback setup, with respect to both the head orientation and the position of the listener. The diffuse ambience of the scene is rendered separately by first spatially subtracting the source signals from the receivers located nearest to the listener position. The resulting residual Ambisonic signals are then spatialized, decorrelated, and summed together with suitable interpolation weights. The proposed system is evaluated through an in situ listening test conducted in 6DoF virtual reality, whereby real-world sound sources are compared with the auralization achieved through the proposed rendering method. Results from the 15 participants suggest that, in comparison to a linear interpolation-based alternative, the proposed object-based approach is perceived as more realistic.
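The article details the full estimation and rendering pipeline; purely as an illustrative aid, the sketch below shows simplified versions of the core signal operations the abstract names: beamforming-based source extraction, spatial subtraction of a source from a receiver's Ambisonic signals, and interpolation weights for mixing the residual ambience. It assumes first-order Ambisonic receivers (ACN channel order, N3D normalization), known source directions, ideal plane-wave sources, and simple inverse-distance weighting; the paper's actual tracking, beamformer, and weighting schemes may differ, and all function names here are hypothetical.

    import numpy as np

    def sh_first_order(azi, ele):
        """Real first-order spherical-harmonic vector (ACN order, N3D norm)
        for a direction given by azimuth/elevation in radians."""
        x = np.cos(ele) * np.cos(azi)
        y = np.cos(ele) * np.sin(azi)
        z = np.sin(ele)
        return np.array([1.0, np.sqrt(3) * y, np.sqrt(3) * z, np.sqrt(3) * x])

    def extract_source(ambi_sig, azi, ele):
        """Isolate a source signal with a plane-wave-decomposition beamformer
        steered towards (azi, ele). ambi_sig: (4, n_samples) array."""
        w = sh_first_order(azi, ele) / 4.0  # unity-gain weights for order 1
        return w @ ambi_sig

    def subtract_source(ambi_sig, src_sig, azi, ele):
        """Spatially subtract a source: re-encode its estimated signal at its
        direction and remove it from the receiver's Ambisonic signals."""
        y = sh_first_order(azi, ele)
        return ambi_sig - np.outer(y, src_sig)

    def ambience_weights(listener_pos, receiver_positions):
        """Inverse-distance interpolation weights (one per receiver) for
        mixing the residual ambience signals; weights sum to one."""
        d = np.linalg.norm(receiver_positions - listener_pos, axis=1)
        g = 1.0 / np.maximum(d, 1e-3)  # guard against division by zero
        return g / g.sum()

    # Example: one ideal plane-wave source at 45 degrees azimuth.
    rng = np.random.default_rng(0)
    src = rng.standard_normal(4800)                  # dry source signal
    y45 = sh_first_order(np.deg2rad(45), 0.0)
    ambi = np.outer(y45, src)                        # plane-wave encoding
    est = extract_source(ambi, np.deg2rad(45), 0.0)  # recovers src
    residual = subtract_source(ambi, est, np.deg2rad(45), 0.0)  # ~zero

    # Example: listener between two receivers on the x-axis.
    w = ambience_weights(np.array([0.5, 0.0, 0.0]),
                         np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]))

In this idealized single-plane-wave case the beamformer output equals the source signal exactly and the residual is zero; with real recordings the residual retains the reverberation and ambience that the proposed system decorrelates and interpolates between receivers.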

Original language: English
Pages (from-to): 355-372
Number of pages: 18
Journal: AES: Journal of the Audio Engineering Society
Volume: 70
Issue number: 5
Publication status: Published - May 2022
MoE publication type: A1 Journal article-refereed

