Machine-learning-based estimation and rendering of scattering in virtual reality

Research output: Contribution to journal › Article › Scientific › peer-review

Research units

  • Technical University of Denmark
  • Norwegian University of Science and Technology

Abstract

In this work, a technique is proposed for rendering the acoustic effect of scattering from finite objects in virtual reality, aiming to provide a perceptually plausible response for the listener rather than a physically accurate one. The effect is implemented with parametric filter structures whose parameters are estimated by artificial neural networks; the networks may be trained on modeled or measured data. The input data consist of a set of geometric features describing a large number of source-object-receiver configurations, and the target data consist of the corresponding filter parameters computed from measured or modeled responses. A proof-of-concept implementation is presented in which geometric descriptions and computationally modeled responses of three-dimensional plate objects are used for training. In a dynamic test scenario with a single source and plate, the approach is shown to produce a spectrogram similar to that of a reference case, although some spectral differences remain. Nevertheless, a perceptual test shows that the technique yields only a slightly lower degree of plausibility than a state-of-the-art acoustic scattering model that accounts for diffraction, and a markedly higher degree of plausibility than a model that omits diffraction.
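The pipeline the abstract describes (geometric features in, filter parameters out, rendered with a parametric filter) can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual architecture: the tiny untrained network, its layer sizes, the feature values, and the choice of a peaking-EQ biquad as the parametric filter are all placeholders standing in for a network trained on modeled or measured scattering responses.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_predict(features, w1, b1, w2, b2):
    """Forward pass of a tiny fully connected network (one tanh hidden layer)."""
    h = [math.tanh(sum(f * w for f, w in zip(features, row)) + b)
         for row, b in zip(w1, b1)]
    return [sum(hj * w for hj, w in zip(h, row)) + b
            for row, b in zip(w2, b2)]

def peaking_biquad(gain_db, fc, q, fs=48000.0):
    """Peaking-EQ biquad coefficients (Audio EQ Cookbook form), normalized to a0 = 1."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    a0 = den[0]
    return [x / a0 for x in b], [x / a0 for x in den]

# Hypothetical geometric features of one source-object-receiver
# configuration, e.g. distances and angles (placeholder values).
features = [1.5, 0.8, 0.3, 0.9]

# Random weights stand in for a network trained on modeled responses.
w1 = [[random.gauss(0, 1) for _ in features] for _ in range(8)]
b1 = [0.0] * 8
w2 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(3)]
b2 = [0.0] * 3

# Network output is squashed into plausible filter-parameter ranges.
raw = mlp_predict(features, w1, b1, w2, b2)
gain_db = 6.0 * math.tanh(raw[0])                # gain within +/- 6 dB
fc = 200.0 + 4000.0 * sigmoid(raw[1])            # center frequency 200 Hz - 4.2 kHz
q = 0.5 + 2.0 * sigmoid(raw[2])                  # Q in [0.5, 2.5]

# The predicted parameters define the filter used to render the scattered sound.
b_coef, a_coef = peaking_biquad(gain_db, fc, q)
print(b_coef, a_coef)
```

In the paper's actual setting, the training targets are filter parameters computed from measured or modeled scattering responses, and the filter structure renders the object's scattered contribution at runtime for each source-object-receiver configuration.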

Details

Original language: English
Pages (from-to): 2664-2676
Number of pages: 13
Journal: Journal of the Acoustical Society of America
Volume: 145
Issue number: 4
Publication status: Published - 1 Apr 2019
MoE publication type: A1 Journal article-refereed

ID: 33833681