In virtual reality, a listener is given the perception of being physically present in another real-world location or in an imaginary world. The task of a virtual audio engine is to simulate the ear-canal signals the listener would receive if the virtual world were real. Traditional approaches simulate discrete sound paths or wave fields, which leads to complex models even for simple room geometries. In this project we develop perceptually accurate, machine-learning-based acoustic modeling methods for virtual reality audio engines. The rendered audio changes dynamically as the user controls an avatar in virtual reality. To enable this, we develop novel machine-learning methods for the efficient rendering of scattering and 3D reverberation effects. If successful, this work would be a major breakthrough in the realism of audio rendering for dynamic virtual reality.
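To make the traditional discrete sound-path approach concrete, the sketch below renders a dry signal through a direct path plus image-source reflections, applying the propagation delay and 1/r spreading loss of each path. This is only an illustration of the classical technique the abstract contrasts against, not the project's method; the function `render_paths` and its parameters are hypothetical names chosen for this example.

```python
import numpy as np

def render_paths(src, listener, mirror_srcs, signal, fs=48000, c=343.0):
    """Sum a dry signal over discrete sound paths (direct + image sources).

    Each path contributes a delayed, attenuated copy of the signal:
    delay = distance / c (in samples), gain = 1/distance (spherical spreading).
    """
    paths = [src] + list(mirror_srcs)
    out = np.zeros(len(signal) + fs)  # headroom for the longest delay
    for p in paths:
        d = float(np.linalg.norm(np.asarray(p) - np.asarray(listener)))
        delay = int(round(fs * d / c))     # propagation delay in samples
        gain = 1.0 / max(d, 1e-3)          # 1/r distance attenuation
        out[delay:delay + len(signal)] += gain * signal
    return out

# Direct path at 1 m plus one first-order reflection via an image
# source at 3 m; an impulse input exposes the two path arrivals.
impulse = np.zeros(100)
impulse[0] = 1.0
ir = render_paths((1, 0, 0), (0, 0, 0), [(3, 0, 0)], impulse)
```

Even this simple model needs one image source per wall per reflection order, so the number of paths grows rapidly with room complexity, which is the cost the abstract refers to.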