Robust and Responsive Acoustic Pairing of Devices Using Decorrelating Time-Frequency Modelling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

Voice user interfaces have grown in popularity, as they enable natural interaction with different applications using one’s voice. To improve their usability and audio quality, several devices could cooperate to provide a unified voice user interface. However, when devices cooperate and share voice-related information, user privacy may be at risk. Therefore, access management rules that preserve user privacy are important. State-of-the-art methods for acoustic pairing of devices derive fingerprints from the time-frequency representation of the acoustic signal and apply error correction. We propose to use such acoustic fingerprinting to authorise devices which are acoustically close. We aim to obtain fingerprints of ambient audio adapted to the requirements of voice user interfaces. Our experiments show that responsiveness and robustness are improved by combining overlapping windows and decorrelating transforms.
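
For illustration only (not the authors’ exact algorithm), the Python sketch below shows one common way to build such a fingerprint: split the signal into overlapping windows, decorrelate each window with a DCT, and quantise the coefficient signs into bits. The function names, frame length, hop size and number of coefficients are assumed, illustrative choices.

import numpy as np
from scipy.fft import dct

def fingerprint(signal, frame_len=1024, hop=256, n_coeffs=16):
    """Binary fingerprint (frames x bits) from overlapping, DCT-decorrelated windows."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    bits = []
    for i in range(n_frames):
        frame = signal[i * hop:i * hop + frame_len] * window       # overlapping analysis window
        coeffs = dct(frame, type=2, norm='ortho')[1:n_coeffs + 1]  # decorrelating transform, DC skipped
        bits.append(coeffs > 0)                                    # sign quantisation -> robust bits
    return np.array(bits, dtype=np.uint8)

def hamming_ratio(fp_a, fp_b):
    """Fraction of differing bits between two fingerprints of equal shape."""
    return float(np.mean(fp_a != fp_b))

Two devices recording the same ambient audio would compare such fingerprints, e.g. via hamming_ratio, and authorise pairing when the distance is small; the error-correction step mentioned in the abstract is omitted in this sketch.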

Details

Original language: English
Title of host publication: European Signal Processing Conference
Publication status: Accepted/In press - 2019
MoE publication type: A4 Article in a conference publication
Event: European Signal Processing Conference - Coruña, Spain
Duration: 2 Sep 2019 – 6 Sep 2019

Publication series

Name: European Signal Processing Conference
ISSN (Print): 2219-5491
ISSN (Electronic): 2076-1465

Conference

Conference: European Signal Processing Conference
Abbreviated title: EUSIPCO
Country: Spain
City: Coruña
Period: 02/09/2019 – 06/09/2019

Research areas

• Voice user interface, Acoustic Pairing, Audio Fingerprint, DCT

ID: 35815509