History-based visual mining of semi-structured audio and text

Matt Mouley Bouamrane*, Saturnino Luz, Masood Masoodian

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

5 Citations (Scopus)

Abstract

Accessing specific or salient parts of multimedia recordings remains a challenge, as there is no obvious way of structuring and representing a mix of space-based and time-based media. A number of approaches have been proposed which usually involve translating the continuous component of the multimedia recording into a space-based representation, such as text from audio through automatic speech recognition and keyframe images from video. In this paper, we present a novel technique which defines retrieval units in terms of a log of actions performed on space-based artefacts, and exploits timing properties and extended concurrency to construct a visual presentation of text and speech data. This technique can be easily adapted to any mix of space-based artefacts and continuous media.
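The abstract describes the approach only at a high level. As a rough illustration of the core idea (mapping a timestamped log of actions on space-based artefacts onto loosely concurrent speech segments to form retrieval units), a minimal sketch might look like the following. All names here (Action, SpeechSegment, retrieval_units, the slack parameter) are assumptions made for illustration and are not taken from the paper; in particular, slack is only a crude stand-in for the paper's notion of extended concurrency.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Action:
    """A logged action on a space-based artefact (e.g. a text edit). Hypothetical type."""
    artefact_id: str
    start: float  # seconds from the start of the recording
    end: float

@dataclass
class SpeechSegment:
    """A segment of the audio track attributed to one speaker. Hypothetical type."""
    speaker: str
    start: float
    end: float

def overlaps(a_start: float, a_end: float,
             b_start: float, b_end: float, slack: float = 0.0) -> bool:
    # Treat two intervals as concurrent if they overlap once each is
    # extended by `slack` seconds on either side.
    return a_start <= b_end + slack and b_start <= a_end + slack

def retrieval_units(actions: List[Action],
                    segments: List[SpeechSegment],
                    slack: float = 2.0) -> List[Tuple[Action, List[SpeechSegment]]]:
    # Each logged action becomes a retrieval unit, paired with the speech
    # segments that are (loosely) concurrent with it.
    return [(a, [s for s in segments
                 if overlaps(a.start, a.end, s.start, s.end, slack)])
            for a in actions]
```

Increasing slack widens each unit's temporal neighbourhood, which is one simple way to capture speech that surrounds, rather than strictly coincides with, an action.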

Original language: English
Title of host publication: MMM2006
Subtitle of host publication: 12th International Multi-Media Modelling Conference - Proceedings
Pages: 360-363
Number of pages: 4
Publication status: Published - 2006
MoE publication type: A4 Article in a conference publication
Event: International Multimedia Modelling Conference - Beijing, China
Duration: 4 Jan 2006 - 6 Jan 2006
Conference number: 12

Publication series

Name: MMM2006: 12th International Multi-Media Modelling Conference - Proceedings
Volume: 2006

Conference

Conference: International Multimedia Modelling Conference
Abbreviated title: MMM
Country: China
City: Beijing
Period: 04/01/2006 - 06/01/2006
