Improving detection of acoustic events using audiovisual data and feature level fusion
Cite as: hdl:2117/85340
Document type: Conference report
Defense date: 2009
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial
property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public
communication or transformation of this work are prohibited without permission of the copyright holder
Abstract
The detection of the acoustic events (AEs) that naturally occur in a meeting room may help to describe the human and social activity that takes place in it. When applied to spontaneous recordings, the detection of AEs from audio information alone shows a large number of errors, mostly due to temporal overlapping of sounds. In this paper, a system to detect and recognize AEs using both audio and video information is presented. A feature-level fusion strategy is used, and the structure of the HMM-GMM based system considers each class separately, using a one-against-all strategy for training. Experimental AED results with a new and rather spontaneous dataset are presented, showing the advantage of the proposed approach.
Citation: Butko, T., Canton, C., Segura, C., Giro, X., Nadeu, C., Hernando, J., Casas, J. Improving detection of acoustic events using audiovisual data and feature level fusion. In: Annual Conference of the International Speech Communication Association (Interspeech), 2009, p. 1147-1150.
ISBN: 978-1-61567-692-7
Publisher version: http://www.isca-speech.org/archive/interspeech_2009/i09_1147.html