Robust and Responsive Acoustic Pairing of Devices Using Decorrelating Time-Frequency Modelling

Pablo Perez Zarazaga, Tom Bäckström, Stephan Sigg

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

2 Citations (Scopus)
177 Downloads (Pure)


Voice user interfaces have increased in popularity, as they enable natural interaction with different applications using one's voice. To improve their usability and audio quality, several devices could interact to provide a unified voice user interface. However, with devices cooperating and sharing voice-related information, user privacy may be at risk. Therefore, access management rules that preserve user privacy are important. State-of-the-art methods for acoustic pairing of devices provide fingerprinting based on the time-frequency representation of the acoustic signal and error correction. We propose to use such acoustic fingerprinting to authorise devices that are acoustically close. We aim to obtain fingerprints of ambient audio adapted to the requirements of voice user interfaces. Our experiments show that responsiveness and robustness are improved by combining overlapping windows and decorrelating transforms.
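The abstract's core idea — deriving a binary fingerprint from overlapping windows of ambient audio via a decorrelating transform such as the DCT — can be illustrated with a minimal sketch. This is not the authors' implementation; the frame length, hop size, window, and the sign-of-difference binarisation (a common Haitsma–Kalker-style choice) are all assumptions made for illustration:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: a decorrelating transform, as used
    # in the paper's keyword list (DCT).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def fingerprint(signal, frame_len=256, hop=128):
    """Binary fingerprint from overlapping, DCT-decorrelated frames.

    frame_len and hop are illustrative values, not those of the paper.
    """
    D = dct_matrix(frame_len)
    win = np.hanning(frame_len)
    bits = []
    # Overlapping windows: consecutive frames share frame_len - hop samples.
    for start in range(0, len(signal) - frame_len + 1, hop):
        coeffs = D @ (signal[start:start + frame_len] * win)
        # Binarise by the sign of differences between adjacent coefficients,
        # so the fingerprint depends only on coarse spectral shape.
        bits.append((np.diff(coeffs) > 0).astype(np.uint8))
    return np.concatenate(bits)
```

Two devices could then compare fingerprints of simultaneously recorded ambient audio via the Hamming distance: a small distance suggests acoustic proximity and could authorise pairing, while unrelated recordings yield a distance near 50%.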
Original language: English
Title of host publication: European Signal Processing Conference
ISBN (Electronic): 978-9-0827-9703-9
Publication status: Published - 2019
MoE publication type: A4 Conference publication
Event: European Signal Processing Conference - A Coruña, Spain
Duration: 2 Sept 2019 – 6 Sept 2019

Publication series

Name: European Signal Processing Conference
ISSN (Print): 2219-5491
ISSN (Electronic): 2076-1465


Conference: European Signal Processing Conference
Abbreviated title: EUSIPCO


Keywords

  • Voice user interface
  • Acoustic pairing
  • Audio fingerprint
  • DCT


