Abstract
Accessing specific or salient parts of multimedia recordings remains a challenge, as there is no obvious way of structuring and representing a mix of space-based and time-based media. A number of approaches have been proposed, usually involving translation of the continuous component of the multimedia recording into a space-based representation, such as text from audio through automatic speech recognition, or keyframe images from video. In this paper, we present a novel technique that defines retrieval units in terms of a log of actions performed on space-based artefacts, and exploits timing properties and extended concurrency to construct a visual presentation of text and speech data. The technique can easily be adapted to any mix of space-based artefacts and continuous media.
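The core idea of the abstract can be sketched in a few lines: given a timestamped log of actions on space-based artefacts, actions whose time spans overlap or lie close together in time are merged into retrieval units, each of which maps to one interval of the continuous media. This is only an illustrative sketch of that grouping step; the `Action` type, the `gap` threshold, and the merging rule are assumptions for the example, not the paper's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Action:
    artefact: str   # id of the space-based artefact acted upon (hypothetical)
    start: float    # seconds from the start of the recording
    end: float

def retrieval_units(log, gap=5.0):
    """Group logged actions into retrieval units: actions whose time
    spans overlap, or fall within `gap` seconds of each other, are
    merged, so each unit covers one interval of the continuous media."""
    units = []
    for a in sorted(log, key=lambda a: a.start):
        if units and a.start <= units[-1]["end"] + gap:
            # Concurrent or near-concurrent: extend the current unit.
            units[-1]["end"] = max(units[-1]["end"], a.end)
            units[-1]["artefacts"].add(a.artefact)
        else:
            units.append({"start": a.start, "end": a.end,
                          "artefacts": {a.artefact}})
    return units

log = [Action("slide-1", 0.0, 4.0), Action("note-a", 6.0, 9.0),
       Action("slide-2", 30.0, 35.0)]
units = retrieval_units(log)
```

With a 5-second gap, the first two actions merge into one unit spanning 0–9 s, while the third starts a new unit; each unit's time interval then addresses the corresponding slice of the audio or video stream.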
Original language | English |
---|---|
Title | MMM2006 |
Subtitle | 12th International Multi-Media Modelling Conference - Proceedings |
Pages | 360-363 |
Number of pages | 4 |
Status | Published - 2006 |
OKM publication type | A4 Article in conference proceedings |
Event | International Multimedia Modelling Conference - Beijing, China Duration: 4 Jan 2006 → 6 Jan 2006 Conference number: 12 |
Publication series
Name | MMM2006: 12th International Multi-Media Modelling Conference - Proceedings |
---|---|
Volume | 2006 |
Conference
Conference | International Multimedia Modelling Conference |
---|---|
Abbreviated title | MMM |
Country/Territory | China |
City | Beijing |
Period | 04/01/2006 → 06/01/2006 |