Spoken audio, like any time-continuous medium, is notoriously difficult to browse or skim without the support of an interface that provides semantically annotated jump points signaling the user where to listen in. Creating time-aligned metadata with human annotators is prohibitively expensive, motivating the investigation of representations of segment-level semantic content based on transcripts generated by automatic speech recognition (ASR). This paper examines the feasibility of using term clouds to provide users with a structured representation of the semantic content of podcast episodes. Podcast episodes are visualized as a series of sub-episode segments, each represented by a term cloud derived from an ASR-generated transcript. The quality of segment-level term clouds is measured quantitatively, and their utility is investigated in a small-scale user study based on human-labeled segment boundaries. Since the segment-level clouds generated from ASR transcripts prove useful, we examine an adaptation of text tiling techniques to speech so that segments can be generated as part of a fully automated indexing and structuring system for browsing spoken audio. Results demonstrate that the generated segments are comparable to human-selected segment boundaries.
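As a rough illustration of the segment-level term cloud idea, the sketch below ranks the terms in each ASR transcript segment by tf-idf and keeps the top-weighted terms as that segment's cloud. This is a minimal sketch under an assumed tf-idf weighting; the paper's actual term weighting, stop-word handling, and ASR-error treatment are not reproduced, and the function name `term_clouds` is illustrative.

```python
import math
from collections import Counter

def term_clouds(segments, top_k=15):
    """Top tf-idf weighted terms per ASR transcript segment.

    segments: list of token lists (lowercased words from ASR output).
    Returns one list of (term, weight) pairs per segment.
    """
    n = len(segments)
    df = Counter()                      # document frequency: segments containing each term
    for seg in segments:
        df.update(set(seg))
    clouds = []
    for seg in segments:
        if not seg:                     # guard against empty segments
            clouds.append([])
            continue
        tf = Counter(seg)
        weights = {t: (c / len(seg)) * math.log(n / df[t]) for t, c in tf.items()}
        clouds.append(sorted(weights.items(), key=lambda kv: -kv[1])[:top_k])
    return clouds
```

Terms that occur in every segment receive zero idf weight, so the cloud naturally favors terms that distinguish one segment from the rest of the episode.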
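The text tiling step can be sketched in the same spirit: classic TextTiling compares the lexical overlap of adjacent token windows and places boundaries where cohesion drops. The simplified variant below thresholds raw cosine similarity rather than computing depth scores as in full TextTiling, and it omits the speech-specific adaptations the paper investigates; the window size and threshold are assumed values.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def boundaries(tokens, window=100, threshold=0.1):
    """Candidate segment breaks: token offsets where lexical cohesion
    between adjacent fixed-size windows falls below the threshold."""
    gaps = range(window, len(tokens) - window, window)
    return [g for g in gaps
            if cosine(Counter(tokens[g - window:g]),
                      Counter(tokens[g:g + window])) < threshold]
```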
[1] Marguerite Fuller, Manos Tsagkias, Eamonn Newman, Jana Besser, Martha Larson, Gareth J.F. Jones, and Maarten de Rijke. 2008. Using Term Clouds to Represent Segment-Level Semantic Content of Podcasts. In Proceedings of the 2nd SIGIR Workshop on Searching Spontaneous Conversational Speech (SSCS 2008).