SpoofCeleb: Speech deepfake detection and SASV in the wild

Jung, Jee-weon; Wu, Yihan; Wang, Xin; Kim, Ji-Hoon; Maiti, Soumi; Matsunaga, Yuta; Shim, Hye-jin; Tian, Jinchuan; Evans, Nicholas et al.
Submitted to arXiv, 18 September 2024

This paper introduces SpoofCeleb, a dataset designed for Speech Deepfake Detection (SDD) and Spoofing-robust Automatic Speaker Verification (SASV). It uses source data drawn from real-world conditions, with spoofing attacks generated by Text-To-Speech (TTS) systems trained on that same real-world data. Training robust recognition systems requires speech recorded in varied acoustic environments with differing noise levels. However, existing datasets typically contain clean, high-quality recordings (bona fide data) because of the requirements of TTS training: studio-quality or well-recorded read speech is usually necessary to train TTS models. Existing SDD datasets are also of limited use for training SASV models owing to insufficient speaker diversity. We present SpoofCeleb, which leverages a fully automated pipeline that processes the VoxCeleb1 dataset into a form suitable for TTS training. We subsequently train 23 contemporary TTS systems. The resulting SpoofCeleb dataset comprises over 2.5 million utterances from 1,251 unique speakers, collected under natural, real-world conditions. The dataset includes carefully partitioned training, validation, and evaluation sets with well-controlled experimental protocols. We provide baseline results for both SDD and SASV tasks. All data, protocols, and baselines are publicly available at https://jungjee.github.io/spoofceleb
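Baseline results for SDD and SASV tasks such as these are conventionally reported as an equal error rate (EER), the operating point at which the false-acceptance and false-rejection rates coincide. As an illustrative aside (not taken from the paper), a minimal EER computation over bona fide and spoofed detection scores might look like the following; the score arrays in the usage note are hypothetical:

```python
import numpy as np

def compute_eer(target_scores: np.ndarray, nontarget_scores: np.ndarray) -> float:
    """Equal error rate: sweep thresholds over all observed scores and
    return the rate where false rejection and false acceptance cross."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    # False-rejection rate: fraction of bona fide trials scored below threshold.
    frr = np.array([(target_scores < t).mean() for t in thresholds])
    # False-acceptance rate: fraction of spoofed trials scored at/above threshold.
    far = np.array([(nontarget_scores >= t).mean() for t in thresholds])
    idx = int(np.argmin(np.abs(frr - far)))
    return float((frr[idx] + far[idx]) / 2.0)
```

For perfectly separated scores the EER is 0; with overlapping score distributions it rises toward 0.5 (chance level), e.g. `compute_eer(np.array([0.8, 0.6, 0.4]), np.array([0.5, 0.3, 0.2]))` yields 1/3.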


Type:
Journal
Date:
2024-09-18
Department:
Digital Security
Eurecom Ref:
7886
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was submitted to arXiv on 18 September 2024 and is available at:
PERMALINK : https://www.eurecom.fr/publication/7886