Application of imperfect speech recognition to navigation and editing of audio documents

Mark Apperley, Orion Edwards, Sam Jansen, Masood Masoodian, Sam McKoy, Bill Rogers, Tony Voyle, David Ware

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

3 Citations (Scopus)

Abstract

In this paper we describe work in progress: using transcripts of audio files, generated with speech recognition software, to navigate and manipulate audio information. Three applications are being developed. The first finds and extracts speech segments from a longer recording. The second supports navigation in a recorded lecture, using both truncated transcript segments in display bars and full transcripts. The third is a novel audio editor that permits rearrangement of spoken text by direct manipulation of its transcript. The intended use of this software is to allow speakers to amend recorded lectures.

Original language: English
Title of host publication: Proceedings - SIGCHI-NZ Symposium on Computer-Human Interaction, CHINZ 2002
Pages: 97-102
Number of pages: 6
Publication status: Published - 2002
MoE publication type: A4 Article in a conference publication
Event: International Conference of the New Zealand Chapter of the ACM's Special Interest Group on Human-Computer Interaction - Hamilton, New Zealand
Duration: 11 Jul 2002 - 12 Jul 2002
Conference number: 3

Publication series

Name: Proceedings - SIGCHI-NZ Symposium on Computer-Human Interaction, CHINZ 2002

Conference

Conference: International Conference of the New Zealand Chapter of the ACM's Special Interest Group on Human-Computer Interaction
Abbreviated title: CHINZ
Country/Territory: New Zealand
City: Hamilton
Period: 11/07/2002 - 12/07/2002

Keywords

  • audio editing
  • audio navigation
  • large interactive display surface
  • speech recognition
