Text-to-speech synthesis in the wild

Jung, Jee-weon; Zhang, Wangyou; Maiti, Soumi; Wu, Yihan; Wang, Xin; Kim, Ji-Hoon; Matsunaga, Yuta; Um, Seyun; Tian, Jinchuan; Shim, Hye-jin; Evans, Nicholas; et al.
Submitted to ICASSP 2025 / Also submitted to arXiv, 13 September 2024

Text-to-speech (TTS) systems are traditionally trained using modest databases of studio-quality, prompted or read speech collected in benign acoustic environments such as anechoic rooms. The recent literature nonetheless shows efforts to train TTS systems using data collected in the wild. While this approach allows for the use of massive quantities of natural speech, until now there have been no common datasets. We introduce the TTS In the Wild (TITW) dataset, the result of a fully automated pipeline applied, in this case, to the VoxCeleb1 dataset commonly used for speaker recognition. We further propose two training sets. TITW-Hard is derived from the transcription, segmentation, and selection of VoxCeleb1 source data. TITW-Easy is derived from the additional application of enhancement and further data selection based on DNSMOS. We show that a number of recent TTS models can be trained successfully using TITW-Easy, but that it remains extremely challenging to produce similar results using TITW-Hard. Both the dataset and protocols are publicly available and support the benchmarking of TTS systems trained using TITW data.
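The abstract only names the pipeline stages; the paper and its released protocols are the authoritative reference. As a purely illustrative sketch of the DNSMOS-based data-selection step described for TITW-Easy, the Python fragment below filters enhanced utterances by a predicted overall quality score. The compute_dnsmos helper, the 3.0 threshold, and the directory layout are assumptions made for illustration, not details taken from the paper or its released pipeline.

# Illustrative sketch only: DNSMOS-threshold selection, not the released TITW pipeline.
from pathlib import Path

def compute_dnsmos(wav_path: Path) -> float:
    # Hypothetical placeholder for a DNSMOS predictor (e.g. an ONNX model)
    # that returns an overall quality estimate for one utterance.
    raise NotImplementedError("plug in a DNSMOS model here")

def select_utterances(wav_dir: Path, mos_threshold: float = 3.0) -> list[Path]:
    # Keep only utterances whose predicted DNSMOS score meets the threshold;
    # 3.0 is an arbitrary value chosen for illustration.
    return [p for p in sorted(wav_dir.glob("*.wav"))
            if compute_dnsmos(p) >= mos_threshold]

if __name__ == "__main__":
    kept = select_utterances(Path("titw_easy/enhanced_wavs"))  # assumed path
    print(f"{len(kept)} utterances retained after DNSMOS-based selection")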


Type:
Conference
Date:
2024-09-13
Department:
Digital Security
Eurecom Ref:
7870
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was submitted to ICASSP 2025 (also submitted to arXiv, 13 September 2024) and is available at:

PERMALINK : https://www.eurecom.fr/publication/7870