When Worlds Collide: AI-Created, Human-Mediated Video Description Services and the User Experience

Sabine Braun, Kim Starr, Jaleh Delfani, Liisa Tiittula, Jorma Laaksonen, Karel Braeckman, Dieter Van Rijsselbergen, Sasha Lagrillière, Lauri Saarikoski

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


Abstract

This paper reports on a user-experience study undertaken as part of the H2020 project MeMAD (‘Methods for Managing Audiovisual Data: Combining Automatic Efficiency with Human Accuracy’), in which multimedia content describers from the television and archive industries tested Flow, an online platform designed to assist the post-editing of automatically generated data and thereby enhance the production of archival descriptions of film content. The study captured the participant experience through screen recordings, the User Experience Questionnaire (UEQ, a benchmarked questionnaire for interactive media) and focus group discussions, and found the post-editing environment to be broadly positive. Users rated the platform’s collation of machine-generated content descriptions, transcripts, named entities (locations, persons, organisations) and translated text as helpful and likely to enhance creative outputs in the longer term. Suggestions for improving the platform included adding specialist vocabulary functionality, shot-type detection, film-topic labelling and automatic music recognition. The most notable limitations of the study are the current level of accuracy achieved in computer vision outputs (i.e. automated video descriptions of film material), which has been hindered by the lack of reliable and accurate training data, and the need for a more narratively oriented interface that allows describers to develop their storytelling techniques and build descriptions within a platform-hosted storyboarding functionality. While this work has value in its own right, it also paves the way for the future (semi-)automation of audio description to assist audiences experiencing sight impairment or cognitive accessibility difficulties, and those for whom ‘visionless’ multimedia consumption is their preferred option.
Original language: English
Title of host publication: HCI International 2021 - Late Breaking Papers: Cognition, Inclusion, Learning, and Culture
Subtitle of host publication: 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings
Editors: Constantine Stephanidis, Don Harris, Wen-Chin Li, Dylan D. Schmorrow, Cali M. Fidopiastis, Margherita Antona, Qin Gao, Jia Zhou, Panayiotis Zaphiris, Andri Ioannou, Robert A. Sottilare, Jessica Schwarz, Matthias Rauterberg
Publisher: Springer
Pages: 147-167
Number of pages: 21
ISBN (Electronic): 978-3-030-90328-2
ISBN (Print): 978-3-030-90327-5
DOIs
Publication status: Published - Jul 2021
MoE publication type: A4 Conference publication
Event: International Conference on Human-Computer Interaction - Virtual, Online
Duration: 24 Jul 2021 – 29 Jul 2021
Conference number: 23

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 13096
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: International Conference on Human-Computer Interaction
Abbreviated title: HCII
City: Virtual, Online
Period: 24/07/2021 – 29/07/2021
