Demo Page of VAE-NMF

Statistical Speech Enhancement Based on Probabilistic Integration of Variational Autoencoder and Non-Negative Matrix Factorization

Y. Bando, M. Mimura, K. Itoyama, K. Yoshii, and T. Kawahara

Abstract: This paper presents a statistical method for single-channel speech enhancement that uses a variational autoencoder (VAE) as a prior distribution on clean speech. A standard approach to speech enhancement is to train a deep neural network (DNN) to take noisy speech as input and output clean speech. This supervised approach requires a very large amount of paired data for training and is not robust against unknown environments. Another approach is to use non-negative matrix factorization (NMF) based on basis spectra trained on clean speech in advance and those adapted to noise on the fly. This semi-supervised approach, however, causes considerable signal distortion in enhanced speech due to the unrealistic assumption that speech spectrograms are linear combinations of the basis spectra. Replacing the poor linear generative model of clean speech in NMF with a VAE, a powerful nonlinear deep generative model trained on clean speech, we formulate a unified probabilistic generative model of noisy speech. Given noisy speech as observed data, we can sample clean speech from its posterior distribution. The proposed method outperformed the conventional DNN-based method in unseen noisy environments.
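To make the formulation concrete, below is a minimal NumPy sketch of this kind of generative model and inference loop: the expected power of each noisy time-frequency bin is modeled as lam_ft = g_t * sigma2_ft(z_t) + sum_k w_fk h_kt, where sigma2 is the VAE decoder output (speech) and W, H are NMF noise parameters. This is not the authors' implementation: the trained decoder is replaced by a stand-in function, the latent variables are refined with a single joint Metropolis-Hastings step per iteration rather than the paper's sampler, and the NMF parameters use Itakura-Saito-style multiplicative updates; all names (decode, loglik, wiener_gain) are illustrative.

import numpy as np

rng = np.random.default_rng(0)
F, T, D, K = 257, 100, 16, 8   # frequency bins, frames, latent dims, noise bases

def decode(z):
    # Stand-in for the trained VAE decoder: maps latents (D, T) to a
    # speech power spectrogram (F, T).  A real system would run the
    # trained network here.
    proj = np.abs(np.sin(np.outer(np.arange(1, F + 1), z.mean(axis=0))))
    return proj + 1e-3

def loglik(X2, lam):
    # Log-likelihood (up to a constant) of the observed power X2 under
    # the zero-mean complex Gaussian model with power lam.
    return -np.sum(X2 / lam + np.log(lam))

X2 = rng.gamma(1.0, 1.0, size=(F, T))   # |x_ft|^2 of noisy speech (toy data)
z = rng.standard_normal((D, T))         # latent variables of clean speech
g = np.ones(T)                          # frame-wise speech gains
W = rng.gamma(1.0, 1.0, size=(F, K))    # noise basis spectra
H = rng.gamma(1.0, 1.0, size=(K, T))    # noise activations

for it in range(50):
    N = W @ H                           # noise power spectrogram
    # Metropolis-Hastings step for z (standard Gaussian prior)
    z_prop = z + 0.1 * rng.standard_normal(z.shape)
    lam = g * decode(z) + N
    lam_prop = g * decode(z_prop) + N
    log_ratio = (loglik(X2, lam_prop) - loglik(X2, lam)
                 - 0.5 * np.sum(z_prop ** 2) + 0.5 * np.sum(z ** 2))
    if np.log(rng.uniform()) < log_ratio:
        z = z_prop
    # Multiplicative updates (Itakura-Saito style) for g, W, and H
    S0 = decode(z)
    lam = g * S0 + W @ H
    g *= np.sum(X2 * S0 / lam ** 2, axis=0) / np.sum(S0 / lam, axis=0)
    lam = g * S0 + W @ H
    W *= ((X2 / lam ** 2) @ H.T) / ((1.0 / lam) @ H.T)
    lam = g * S0 + W @ H
    H *= (W.T @ (X2 / lam ** 2)) / (W.T @ (1.0 / lam))

# Wiener filtering: apply the speech/total power ratio to the complex STFT
S = g * decode(z)
N = W @ H
wiener_gain = S / (S + N)   # multiply the noisy complex spectrogram by this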

Fig. 1: Overview of our speech enhancement model.

Fig. 2: VAE representation of a speech spectrogram.

Enhancement results for real recordings

NOTE: The following results were not presented in the paper because the SDR evaluation in the paper used simulated speech signals.

Enhancement results for the CHiME-3 development set [1]

NOTE: DNN-IRM was trained on noisy data recorded in the same environments as the following signals.

BUS (on a bus) condition
Input
VAE-NMF
DNN-IRM
RPCA
CAF (in a cafe) condition
Input
VAE-NMF
DNN-IRM
RPCA
PED (in a pedestrian area) condition
Input
VAE-NMF
DNN-IRM
RPCA
STR (at a street junction) condition
Input
VAE-NMF
DNN-IRM
RPCA

Enhancement results for the DEMAND dataset [2]

NOTE: The noisy environments in this dataset are unknown to DNN-IRM.

SUB (on a subway) condition
Input
VAE-NMF
DNN-IRM
RPCA
CAF (in a cafe) condition
Input
VAE-NMF
DNN-IRM
RPCA
SQU (at a town square) condition
Input
VAE-NMF
DNN-IRM
RPCA
LIV (in a living room) condition
Input
VAE-NMF
DNN-IRM
RPCA

References

[1] J. Barker et al., "The third 'CHiME' speech separation and recognition challenge: Dataset, task and baselines," in Proc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2015, [Link].
[2] J. Thiemann et al., "DEMAND: Diverse Environments Multichannel Acoustic Noise Database," 2013, [Link].