Towards Universal Mono-to-Binaural Speech Synthesis
We consider the problem of synthesizing binaural speech from mono audio in arbitrary environments, which is important for modern telepresence and extended-reality applications. Using a new benchmark (TUT Mono-to-Binaural), the first introduced since the original dataset of Richard et al. (2021), we find that existing neural mono-to-binaural methods overfit to non-spatial acoustic properties. While these prior methods focus on learning neural geometric transforms of monaural audio, we propose BinauralZero, a strong initial baseline for universal mono-to-binaural synthesis that also matches or outperforms state-of-the-art neural mono-to-binaural renderers in their own environments despite never seeing any binaural data. It leverages the surprising discovery that an off-the-shelf mono audio denoising model can competently enhance an initial binauralization produced by simple parameter-free transforms. We perform comprehensive ablations to understand how BinauralZero bridges the representation gap between mono and binaural audio, and analyze how current automated mono-to-binaural metrics are decorrelated from human ratings.
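As a rough illustration of what a "simple parameter-free transform" for initial binauralization could look like, the sketch below applies an interaural time difference (Woodworth spherical-head approximation) and a broadband interaural level difference to a mono signal. This is a generic ITD/ILD construction under assumed constants (head radius, speed of sound, maximum ILD), not the paper's actual transform; the function name and parameters are hypothetical.

```python
import numpy as np

def binauralize(mono, sr, azimuth_rad, head_radius=0.0875, c=343.0):
    """Hypothetical parameter-free binauralization via ITD/ILD.

    mono: 1-D float array; sr: sample rate in Hz;
    azimuth_rad: source azimuth, positive = source to the right.
    Constants (head radius in m, speed of sound in m/s) are assumptions.
    """
    # Woodworth ITD approximation for a rigid spherical head.
    itd = (head_radius / c) * (abs(azimuth_rad) + np.sin(abs(azimuth_rad)))
    delay = int(round(itd * sr))  # far-ear delay in samples
    # Crude broadband ILD: attenuate the far ear by up to ~6 dB.
    ild_gain = 10.0 ** (-abs(np.sin(azimuth_rad)) * 6.0 / 20.0)
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * ild_gain
    # Positive azimuth: right ear is near, left ear is far.
    left, right = (far, near) if azimuth_rad >= 0 else (near, far)
    return np.stack([left, right])
```

A learned mono denoiser could then be run on each channel to clean up the artifacts of this crude spatialization, which is the kind of enhancement step the abstract describes.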
Model
Binaural Speech Dataset
| Mono | BinauralZero | WarpNet | BinauralGrad | NFS | Ground Truth |
|---|---|---|---|---|---|

*(audio samples)*
TUT Mono-to-Binaural Dataset
| Mono | BinauralZero | WarpNet | BinauralGrad | NFS | Ground Truth |
|---|---|---|---|---|---|

*(audio samples)*