Informed single-channel speech separation using HMM-GMM user-generated exemplar source

Qi Wang, W. L. Woo, S. S. Dlay

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)


We present a new approach to single-channel speech separation aided by a user-generated exemplar source recorded from a microphone. Our method deviates from conventional model-based methods, which rely heavily on speaker-dependent training data. We readdress the problem with a new approach based on utterance-dependent patterns extracted from the user-generated exemplar source. The proposed approach is less restrictive, requiring no speaker-dependent information, yet it exceeds the performance of conventional model-based separation methods in separating male-male speech mixtures. We combine general speaker-independent (SI) features with specifically generated utterance-dependent (UD) features in a joint probability model. The UD features are initially extracted from the user-generated exemplar source and represented as statistical estimates. These estimates are calibrated using information extracted from the mixture to statistically represent the target source. The UD probability model is then generated to resolve ambiguities and provide better cues for separation. The proposed algorithm is tested and compared with recent methods on the GRID database and the Mocha-TIMIT database.
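To illustrate the core idea of model-based separation, here is a minimal sketch in which each source is modeled probabilistically and a soft mask is derived from the models' per-frame likelihoods. This is not the paper's HMM-GMM algorithm: it uses single diagonal Gaussians on hypothetical one-dimensional features instead of HMM-GMM models on speech spectra, and all data and function names are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Hypothetical 1-D "feature" samples for two speakers (stand-ins for
# log-spectral features; the paper's models are HMM-GMMs, not single Gaussians).
speaker_a = [random.gauss(0.0, 1.0) for _ in range(500)]
speaker_b = [random.gauss(3.0, 1.0) for _ in range(500)]

def fit_gaussian(xs):
    """Fit a single diagonal-covariance Gaussian (a 1-component GMM)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs) + 1e-6
    return mean, var

def log_likelihood(x, mean, var):
    """Log-likelihood of one frame under the Gaussian model."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

model_a = fit_gaussian(speaker_a)
model_b = fit_gaussian(speaker_b)

def mask(frame):
    """Posterior probability that a mixture frame is dominated by speaker A."""
    la = log_likelihood(frame, *model_a)
    lb = log_likelihood(frame, *model_b)
    return 1.0 / (1.0 + math.exp(lb - la))

# Frames near each speaker's mean receive masks near 1 and 0 respectively.
print(mask(0.0), mask(3.0))
```

In the paper's setting, the speaker-B model would be replaced by the calibrated utterance-dependent model built from the user-generated exemplar, and the mask would drive the actual spectral separation.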

Original language: English
Pages (from-to): 2087-2100
Number of pages: 14
Journal: IEEE/ACM Transactions on Audio, Speech and Language Processing
Issue number: 12
Early online date: 12 Sept 2014
Publication status: Published - Dec 2014


