Model-based processing for acoustic scene analysis
Document type: Conference report
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Restricted access - publisher's policy
The analysis of acoustic scenes requires several functionalities, of which recognition (of speech, speakers, or other acoustic events) and spatial localization are perhaps the two most relevant. To reduce invasiveness, the microphones are placed far away from the sound sources, and possibly grouped in arrays that may be distributed around the room rather than arranged in a fixed configuration. Aiming at increased performance, the usual model-based approach employed for sound recognition or detection can be extended to other co-occurrent tasks such as source localization, so that both tasks can be carried out jointly, using the same formulation and processing. In this paper, we intend to illustrate that point by presenting together a few new model-based techniques that deal with the problems of overlapped-sound recognition, multi-source localization, and channel selection. They are briefly described and tested in a smart-room environment with a multiple-microphone-array setup.
Citation: Nadeu, C., Chakraborty, R., Wolf, M. Model-based processing for acoustic scene analysis. A: European Signal Processing Conference. "2014 Proceedings of the 22nd European Signal Processing Conference (EUSIPCO): 1-5 September 2014: Lisbon, Portugal". Lisbon: Institute of Electrical and Electronics Engineers (IEEE), 2014, p. 2370-2374.