Hybrid modeling of acoustics for virtual reality audio engines

Project Details


Virtual reality techniques create for a listener the perception of physical presence in another real-world location or in an imaginary world. The task of a virtual audio engine is to simulate the ear-canal signals the listener would receive if the virtual world were real. Traditional approaches simulate discrete sound paths or wave fields, which leads to complex models even for simple room geometries. In this project we develop perceptually accurate, machine-learning-based acoustical modeling methods for virtual reality audio engines. The rendered audio changes dynamically as the user controls an avatar in virtual reality. To enable this, novel methods for the efficient rendering of scattering and 3D reverberation effects are developed using machine learning. If successful, this would be a major breakthrough in the realism of audio rendering in dynamic virtual reality.
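The discrete sound-path approach mentioned above can be illustrated with the classic image-source method for a shoebox room: each wall reflection is replaced by a mirrored copy of the source, and each path contributes a delay and a distance attenuation. The sketch below is a minimal first-order example for illustration only; the room dimensions, positions, and function names are assumptions, not part of the project.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def first_order_image_sources(room, src):
    """Mirror the source across each of the six walls of a shoebox room.

    room: (Lx, Ly, Lz) dimensions in metres; src: (x, y, z) source position.
    Returns the direct source plus six first-order image sources.
    """
    images = [src]
    for axis, length in enumerate(room):
        for wall in (0.0, length):
            img = list(src)
            img[axis] = 2.0 * wall - img[axis]  # reflect across the wall plane
            images.append(tuple(img))
    return images

def path_delays(images, listener):
    """Propagation delay (seconds) and 1/r amplitude for each sound path."""
    paths = []
    for img in images:
        r = math.dist(img, listener)
        paths.append((r / SPEED_OF_SOUND, 1.0 / max(r, 1e-9)))
    return paths

# Hypothetical 5 m x 4 m x 3 m room with a source and a listener
room = (5.0, 4.0, 3.0)
source = (1.0, 2.0, 1.5)
listener = (4.0, 2.0, 1.5)

for delay, gain in path_delays(first_order_image_sources(room, source), listener):
    print(f"delay = {delay * 1000:.2f} ms, gain = {gain:.3f}")
```

Even this first-order model yields seven paths per source-listener pair; higher reflection orders and realistic geometries grow the path count rapidly, which motivates the learned rendering methods pursued in the project.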
Effective start/end date: 01/09/2018 – 31/08/2022

Collaborative partners

