Privacy in Speech Communication Technology

Research output: Article in a book/conference publication › Abstract › Scientific


Speech technology has become increasingly popular as many users find such technologies desirable and useful. However, as more and more devices and cloud services have access to everything we say, they expose us to breaches of privacy, including in ways we do not yet fully understand. Poorly managed privacy in speech technology is, however, highly likely to cause severe problems, similar in magnitude to the Cambridge Analytica scandal in social media [1]. To preempt such problems and to develop speech communication technology that is easy to use also with regard to privacy, we need both to understand what privacy means for individual users and society, and to develop technology that supports users’ needs and assumptions about privacy. User studies about privacy in speech communication technology are, however, not straightforward to implement. People are not educated about the potential risks and detrimental consequences of breaches in the privacy of their devices and services, and fake news and scaremongering in the popular media further distort people’s perceptions of privacy. In fact, even experts in the field find it challenging to predict the full range of consequences. For example, if a speech-operated service identifies indications of domestic violence, to what extent and when is the privacy of the users more important than their physical safety? Asking users directly about the privacy of speech communication technology is therefore difficult, or can even be entirely uninformative. As a first step toward evidence-based privacy design, we therefore instead study the perception of privacy in human-to-human interaction [2]; if devices understood how people perceive and react to threats to their privacy, they could be designed to respect privacy and to behave in predictable and useful ways.
To this end, we have recorded discussions in different acoustic environments and asked the subjects to rate their experience of privacy with a questionnaire. In a subsequent step, our objective is to use machine learning to assess acoustic environments and predict users’ expectations of privacy. In the long term, we want to use that information to adapt the privacy level of speech technology dynamically to the environment. A second approach is to build, from the ground up, the methods needed for privacy-respecting speech technology. In particular, we need privacy-preserving authentication methods. For comparison, in interaction between humans, social convention holds that whoever can hear you speak is allowed to hear what you say. Access management between devices can correspondingly depend on whether they hear the same signal, which we implement with an acoustic fingerprint and a cryptographic handshake [3]. These examples highlight our aim of developing speech communication technology that balances usability and privacy. It is a new field of speech research, which has recently started to gain attention.
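The pairing idea of [3] can be illustrated with a minimal sketch: each device computes a binary fingerprint from the audio it captures, and pairing succeeds only if the two fingerprints (nearly) agree, i.e. if the devices heard the same signal. The fingerprint construction (signs of time- and frequency-differences of band energies) and the key derivation below are illustrative assumptions, not the actual method of [3]; in particular, a real handshake would reconcile residual bit errors cryptographically rather than hash one device's bits directly.

```python
import hashlib
import numpy as np

def acoustic_fingerprint(signal, frame_len=256, n_bands=16):
    """Binary fingerprint from a captured audio signal (illustrative).

    Frames the signal, pools FFT magnitudes into coarse frequency bands,
    and takes the sign of the time- and frequency-difference of the band
    energies as the fingerprint bits.
    """
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    # Pool FFT bins into coarse frequency bands.
    bands = np.array_split(spectra, n_bands, axis=1)
    energy = np.stack([b.sum(axis=1) for b in bands], axis=1)
    # Sign of the double difference (over time, then frequency) -> bits.
    diff = np.diff(np.diff(energy, axis=0), axis=1)
    return (diff > 0).astype(np.uint8).ravel()

def try_pair(fp_a, fp_b, max_mismatch=0.25):
    """Grant access only if both devices heard (nearly) the same signal."""
    mismatch = np.mean(fp_a != fp_b)
    if mismatch > max_mismatch:
        return None  # different acoustic environments: refuse to pair
    # Placeholder for the cryptographic handshake: derive a session key
    # from the fingerprint bits (a real system would use error-tolerant
    # key agreement so both sides obtain the same key despite bit errors).
    return hashlib.sha256(fp_a.tobytes()).hexdigest()
```

Two devices in the same room capture noisy copies of the same signal, so their fingerprints differ in only a few bits and pairing succeeds; fingerprints of unrelated signals disagree in roughly half their bits, so pairing is refused.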
1. The Guardian, "The Cambridge Analytica Files", cambridge-analytica-files, accessed.
2. Zarazaga, P. P., Das, S., Bäckström, T., Raju, V. V., and Vuppala, A. K., "Sound Privacy: A Conversational Speech Corpus for Quantifying the Experience of Privacy", Proc. Interspeech, 2019.
3. Zarazaga, P. P., Bäckström, T., and Sigg, S., "Robust and Responsive Acoustic Pairing of Devices Using Decorrelating Time-Frequency Modelling", Proc. 27th European Signal Processing Conference (EUSIPCO), IEEE, 2019.
Status: Published - 19 August 2021
OKM publication type: Not eligible
Event: Fonetiikan päivät - Phonetics Symposium - Virtual, Online, Finland
Duration: 19 August 2021 - 20 August 2021
Conference number: 34


Conference: Fonetiikan päivät - Phonetics Symposium
City: Virtual, Online

