Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

Automatic Metadata Generation and Video Editing Based on Speech and Image Recognition for Medical Education Contents

Satoshi Tamura (1), Koji Hashimoto (1), Jiong Zhu (1), Satoru Hayamizu (1), Hirotsugu Asai (2), Hideki Tanahashi (2), Makoto Kanagawa (3)

(1) Gifu University, Japan; (2) Gifu Prefectural Research Institute of Manufacturing Information Technology, Japan; (3) Sanyo Electric Co. Ltd., Japan

This paper reports a metadata generation system and an automatic video editing system. Metadata are descriptive information about other data. In the audio metadata generation system, speech recognition using a general language model (LM) and a specialized LM is applied to the input speech to obtain segments (event groups) and audio metadata (event information), respectively. In the video editing system, visual metadata obtained by image recognition are combined with the audio metadata into audio-visual metadata, and multiple videos are then edited into a single video using these audio-visual metadata. Experiments were conducted to evaluate the event detection of the systems using medical education contents, ACLS and BLS. The audio metadata system achieved about 78% event-detection correctness. In the editing system, 87% event correctness was obtained with the audio-visual metadata, and a survey showed that the edited video is appropriate and useful.
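The combination of audio and visual metadata into audio-visual metadata can be sketched as follows. This is a minimal illustration only, assuming time-stamped events merged into one chronological stream; the event fields, names, and merge rule are assumptions for exposition, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    # Hypothetical event record: times in seconds, a source tag,
    # and a recognized event label.
    start: float
    end: float
    source: str   # "audio" or "visual"
    label: str

def merge_metadata(audio_events, visual_events):
    """Combine audio and visual metadata events into a single
    time-ordered audio-visual metadata stream (sketch)."""
    return sorted(audio_events + visual_events, key=lambda e: e.start)

# Hypothetical example events
audio = [Event(2.0, 5.0, "audio", "chest compression instruction")]
visual = [Event(1.0, 4.0, "visual", "instructor close-up")]
merged = merge_metadata(audio, visual)
```

A video editor could then walk the merged stream in order and, at each event boundary, select which camera's footage to cut to.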

Full Paper

Bibliographic reference.  Tamura, Satoshi / Hashimoto, Koji / Zhu, Jiong / Hayamizu, Satoru / Asai, Hirotsugu / Tanahashi, Hideki / Kanagawa, Makoto (2006): "Automatic metadata generation and video editing based on speech and image recognition for medical education contents", In INTERSPEECH-2006, paper 1132-Thu2WeO.4.