X-TaSNet: Robust and Accurate Time-Domain Speaker Extraction Network
Abstract
Extracting the speech of a target speaker from mixed audio, given a reference utterance from that speaker, is a challenging yet powerful technology in speech processing. Recent studies of speaker-independent speech separation, such as TasNet, have shown promising results by applying deep neural networks directly to raw time-domain waveforms. Such separation networks, however, do not directly produce reliable and accurate output when a target speaker is explicitly specified, because they require prior knowledge of the number of speakers and lack robustness when the target speaker is absent from the mixture. In this paper, we overcome these limitations by introducing a new speaker-aware speech masking method, called X-TaSNet. Our proposal adopts new strategies, including a distortion-based loss function and a corresponding alternating training scheme, to better address the robustness issue. X-TaSNet significantly enhances the extracted speech quality, tripling the SDRi and SI-SNRi of the output speech over the state-of-the-art voice filtering approach.
X-TaSNet also improves the reliability of the results, raising the speaker-identity accuracy of the output audio to 95.4% and returning silent audio in most cases when the target speaker is not present. These results demonstrate that X-TaSNet takes a solid step toward more practical applications of speaker extraction technology.
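The abstract does not spell out the loss formulation. As a rough illustration only, the sketch below pairs a standard SI-SNR term (applied when the target speaker is present) with an output-energy penalty (applied when the speaker is absent), which is one plausible way a distortion-aware objective could push the network toward silent output for absent speakers. The function names, the `alpha` weight, and the exact form of the energy penalty are assumptions for illustration, not the paper's actual method.

```python
import torch

def si_snr(estimate: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Scale-invariant SNR (in dB) between estimated and reference waveforms."""
    # Zero-mean both signals so the projection below is scale-invariant.
    estimate = estimate - estimate.mean(dim=-1, keepdim=True)
    target = target - target.mean(dim=-1, keepdim=True)
    # Project the estimate onto the target to isolate the "clean" component.
    dot = torch.sum(estimate * target, dim=-1, keepdim=True)
    energy = torch.sum(target ** 2, dim=-1, keepdim=True) + eps
    s_target = dot / energy * target
    e_noise = estimate - s_target
    ratio = torch.sum(s_target ** 2, dim=-1) / (torch.sum(e_noise ** 2, dim=-1) + eps)
    return 10 * torch.log10(ratio + eps)

def extraction_loss(estimate: torch.Tensor,
                    target: torch.Tensor,
                    target_present: torch.Tensor,
                    alpha: float = 0.1,
                    eps: float = 1e-8) -> torch.Tensor:
    """Hypothetical combined objective (illustrative only): negative SI-SNR
    when the target speaker is present, and an output-energy penalty when
    the speaker is absent, encouraging the network to emit silence."""
    present_loss = -si_snr(estimate, target, eps)
    # Energy penalty in dB-like scale; +1.0 keeps the log argument positive.
    absent_loss = alpha * 10 * torch.log10(torch.sum(estimate ** 2, dim=-1) + 1.0)
    return torch.where(target_present, present_loss, absent_loss).mean()
```

Training on mixtures that sometimes omit the target speaker, with a per-sample `target_present` flag selecting between the two terms, is one way an alternating scheme of the kind the abstract mentions could be realized.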