Abstract
Wave field synthesis (WFS) is a multichannel audio reproduction method of considerable computational cost that renders an accurate spatial sound field using a large number of loudspeakers to emulate virtual sound sources. The rendering of moving sound sources can be improved by using fractional delay filters, and room reflections can be compensated by an inverse filter bank that corrects the room effects at selected points within the listening area. However, both the fractional delay filters and the room compensation filters further increase the computational requirements of the WFS system. This paper analyzes the performance of a WFS system composed of 96 loudspeakers that integrates both strategies. To deal with the large computational complexity, we explore the use of a graphics processing unit (GPU) as a massive signal co-processor to increase the capabilities of the WFS system. The performance of the method, as well as the benefits of GPU acceleration, is demonstrated for different sizes of room compensation filters and for fractional delay filters of order 9. The results show that a 96-loudspeaker WFS system efficiently implemented on a state-of-the-art GPU can synthesize the movements of 94 sound sources in real time while simultaneously managing 9216 room compensation filters with more than 4000 coefficients each.
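The abstract refers to fractional delay filters of order 9, which WFS systems commonly realize as Lagrange-interpolation FIR filters so that a source can be delayed by a non-integer number of samples as it moves. As a hedged illustration only (the paper's exact filter design is not given here; the function name and parameters below are our own), a minimal sketch of an order-N Lagrange fractional-delay filter:

```python
import numpy as np

def lagrange_fd(order, delay):
    """Order-N Lagrange fractional-delay FIR coefficients.

    `delay` is the total delay in samples; for best accuracy it
    should lie near the filter centre, i.e. delay ~ order/2 + frac.
    Tap n is the product over k != n of (delay - k) / (n - k).
    """
    n = np.arange(order + 1)
    h = np.ones(order + 1)
    for k in range(order + 1):
        idx = n != k
        h[idx] *= (delay - k) / (n[idx] - k)
    return h

# Example: order-9 filter approximating a delay of 4.3 samples.
h = lagrange_fd(9, 4.3)

# Filtering a signal with h delays it by ~4.3 samples; here a unit
# impulse at n = 10 is moved to (approximately) n = 14.3.
x = np.zeros(64)
x[10] = 1.0
y = np.convolve(x, h)
```

For an integer delay the filter degenerates to a pure shift (a single unit tap), and for any delay the coefficients sum to one, which makes the design easy to sanity-check.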
| Original language | English |
| --- | --- |
| Article number | 7750558 |
| Pages (from-to) | 435-447 |
| Number of pages | 13 |
| Journal | IEEE/ACM Transactions on Audio, Speech, and Language Processing |
| Volume | 25 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1 Feb 2017 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- Audio systems
- interpolation
- parallel architectures
- parallel processing
- signal synthesis