Machine Learning Based Auralization of Rigid Sphere Scattering

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


In this paper, we present a method to auralize the acoustic scattering and occlusion of a single rigid sphere using parametric filters and neural networks, providing fast processing and parameter estimation. The filter parameters are estimated by neural networks from the geometric parameters of the simulated scene, e.g., the relative receiver position and the size of the rigid spherical scatterer. The model distinguishes between an unoccluded and an occluded source-receiver path, for which different filter structures are used. In contrast to simulating occlusion and scattering with numerical or analytical methods, the proposed approach renders with low computational load, making it suitable for real-time auralization in virtual reality. The presented method provides a good fit for modeling the acoustic effects of a rigid sphere. Furthermore, a listening test was conducted, which showed plausible reproduction of the scattering and occlusion of a rigid sphere.
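The pipeline described in the abstract — a neural network mapping scene geometry to parametric filter coefficients, which are then applied to the audio signal — can be illustrated with a minimal sketch. Everything below is assumed for illustration: the tiny MLP, the choice of a one-pole low-pass as the occluded-path filter, and the geometry features are stand-ins, not the architecture or filter structures used in the paper.

```python
import numpy as np

def mlp_predict(geom, W1, b1, W2, b2):
    """Tiny MLP forward pass: geometric features -> log cutoff frequency.

    Illustrative only; the paper's network architecture is not specified here.
    """
    h = np.tanh(geom @ W1 + b1)            # single hidden layer
    return h @ W2 + b2

def one_pole_lowpass(x, fc, fs=48000.0):
    """First-order IIR low-pass; a simple stand-in for an occlusion filter."""
    a = np.exp(-2.0 * np.pi * fc / fs)     # pole location from cutoff frequency
    y = np.empty_like(x)
    state = 0.0
    for n, xn in enumerate(x):
        state = (1.0 - a) * xn + a * state # y[n] = (1-a) x[n] + a y[n-1]
        y[n] = state
    return y

# Illustrative use with untrained (random) weights -- in practice the network
# would be trained against reference rigid-sphere scattering data.
rng = np.random.default_rng(0)
geom = np.array([0.15, 1.0, np.pi / 2])    # [sphere radius m, distance m, angle rad]
W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)

log_fc = mlp_predict(geom, W1, b1, W2, b2)[0]
fc = 1000.0 * np.exp(log_fc)               # map network output to Hz (assumed scaling)
noise = rng.standard_normal(4800)
filtered = one_pole_lowpass(noise, fc)     # occluded-path rendering of the source signal
```

The appeal of this structure, as the abstract notes, is that the expensive part (estimating filter parameters) is a single cheap network inference per geometry update, while the per-sample audio processing is only a low-order IIR filter — hence the low computational load compared with numerical or analytical scattering simulation.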
Original language: English
Title of host publication: 2021 Immersive and 3D Audio: from Architecture to Automotive (I3DA)
Number of pages: 8
ISBN (Electronic): 978-1-6654-0998-8
ISBN (Print): 978-1-6654-0999-5
Publication status: Published - 10 Sept 2021
MoE publication type: A4 Conference publication
Event: International Conference on Immersive and 3D Audio - Bologna, Italy
Duration: 8 Sept 2021 – 10 Sept 2021


Conference: International Conference on Immersive and 3D Audio
Abbreviated title: I3DA


  • Solid modeling
  • Three-dimensional displays
  • Computational modeling
  • Neural networks
  • Virtual reality
  • Receivers
  • Rendering (computer graphics)


