Simultaneous speech detection with spatial features for speaker diarization
Rights access: Restricted access (publisher's policy)
Simultaneous speech poses a challenging problem for conventional speaker diarization systems. In meeting data, a substantial amount of missed speech error is due to speaker overlap, since usually only one speaker label is assigned per segment. Furthermore, simultaneous speech included in training data can lead to corrupted speaker models and thus worse segmentation performance. In this paper, we propose the use of three spatial cross-correlation-based features, together with spectral information, for speaker overlap detection on distant microphones. Data from different microphone pairs are fused by means of principal component analysis. We obtain an improvement over the baseline speaker diarization system by discarding overlap segments from model training and assigning two speaker labels to such segments according to the likelihoods computed in Viterbi decoding. In experiments conducted on the AMI Meeting Corpus, we achieve a relative DER reduction of 11.2% and 17.0% for single- and multi-site data, respectively. Improving the clustering with techniques such as beamforming and a TDOA feature stream also increases the effectiveness of the overlap-labeling algorithm. Preliminary experiments with NIST RT data show a DER improvement on the RT'09 meeting recordings as well.
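As a rough illustration of the pipeline the abstract describes, the sketch below computes cross-correlation-based spatial features per microphone pair and fuses all pairs with PCA. This is a minimal sketch under stated assumptions: the specific feature definitions (main GCC-PHAT peak height and the second-to-first peak ratio, motivated by two concurrent speakers at different positions producing two comparable peaks) and all function names are hypothetical, not the three features used in the paper.

```python
import numpy as np

def gcc_phat(x, y):
    """Generalized cross-correlation with phase transform (GCC-PHAT)
    between two microphone channels, a standard spatial cue on
    distant-microphone meeting data."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    spec = X * np.conj(Y)
    spec /= np.abs(spec) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(spec, n=n)
    # Reorder so negative lags precede positive lags.
    return np.concatenate((cc[-(len(y) - 1):], cc[:len(x)]))

def pair_overlap_features(x, y):
    """Hypothetical per-pair features: height of the strongest
    cross-correlation peak, and the ratio of the second-strongest
    peak to it (closer to 1 suggests two competing sources)."""
    cc = np.abs(gcc_phat(x, y))
    order = np.argsort(cc)[::-1]
    p1, p2 = cc[order[0]], cc[order[1]]
    return np.array([p1, p2 / (p1 + 1e-12)])

def pca_fuse(features, n_components=2):
    """Fuse per-frame feature vectors (rows) down to n_components
    dimensions via PCA, implemented as an SVD of the centered data."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy example: 10 frames of 4-channel noise (512 samples per channel).
rng = np.random.default_rng(0)
frames = [rng.standard_normal((4, 512)) for _ in range(10)]
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
feats = np.array([
    np.concatenate([pair_overlap_features(f[i], f[j]) for i, j in pairs])
    for f in frames
])                                   # shape (10, 12): 2 features x 6 pairs
fused = pca_fuse(feats, n_components=2)
print(fused.shape)                   # one compact spatial vector per frame
```

In a real system the fused vectors would be appended to spectral features (e.g. MFCCs) and fed to the overlap detector; the PCA step is what lets a variable number of microphone pairs collapse to a fixed-size stream.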
Citation: Zelenak, M. [et al.]. Simultaneous speech detection with spatial features for speaker diarization. "IEEE Transactions on Audio, Speech, and Language Processing", February 2012, vol. 20, no. 2, pp. 436-446.