Disentangled Representation Learning for Multilingual Speaker Recognition

Kihyun Nam1*, Youkyum Kim1*, Jaesung Huh2, Hee-Soo Heo3, Jee-weon Jung4, Joon Son Chung1

1Korea Advanced Institute of Science and Technology / Republic of Korea.

2University of Oxford / United Kingdom.

3Naver Corporation / Republic of Korea.

4Carnegie Mellon University / USA.

Abstract

The goal of this paper is to learn robust speaker representations for the bilingual speaking scenario. The majority of the world's population speaks at least two languages; however, most speaker recognition systems fail to recognise the same speaker when they speak in different languages.

Popular speaker recognition evaluation sets do not consider the bilingual scenario, making it difficult to analyse the effect of bilingual speakers on speaker recognition performance. In this paper, we release VoxCeleb1-B, a large-scale evaluation set derived from VoxCeleb that covers bilingual scenarios.

We introduce an effective disentanglement learning strategy that combines adversarial and metric learning-based methods. This approach addresses the bilingual scenario by disentangling language-related information from the speaker representation while keeping speaker representation learning stable. Our language-disentangled learning method requires only language pseudo-labels and no manual annotation. A sketch of the adversarial component is given below.
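A common way to realise adversarial language disentanglement is a gradient reversal layer: the language classifier is trained normally, but its gradients are reversed before reaching the encoder, so the encoder learns to suppress language cues. The following is a minimal PyTorch sketch of this idea, not the authors' exact method; `encoder`, `emb_dim`, `num_speakers`, and `num_languages` are illustrative placeholders, and the metric learning component of the paper is not shown.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows into the encoder; no gradient w.r.t. lam.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class DisentangledSpeakerNet(nn.Module):
    """Hypothetical wrapper: speaker head plus adversarial language head."""
    def __init__(self, encoder, emb_dim, num_speakers, num_languages):
        super().__init__()
        self.encoder = encoder                      # any speaker embedding extractor
        self.speaker_head = nn.Linear(emb_dim, num_speakers)
        self.language_head = nn.Linear(emb_dim, num_languages)

    def forward(self, x, lam=1.0):
        emb = self.encoder(x)                       # (batch, emb_dim)
        spk_logits = self.speaker_head(emb)
        # The language classifier trains as usual, but the reversed gradient
        # pushes the encoder to discard language-related information.
        lang_logits = self.language_head(grad_reverse(emb, lam))
        return emb, spk_logits, lang_logits
```

In training, a cross-entropy loss on `lang_logits` against language pseudo-labels is simply added to the speaker objective; the reversal handles the adversarial direction without a separate min-max loop.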

The VoxCeleb1-B Dataset

VoxCeleb1-B is a large-scale evaluation set for the bilingual speaking scenario, comprising 808,574 trials across 15 languages.
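For illustration, the sketch below scores a trial list and computes the equal error rate. It assumes the standard VoxCeleb-style trial format of one `<label> <utt1> <utt2>` triple per line (label 1 for same speaker, 0 otherwise); whether VoxCeleb1-B uses exactly this layout should be checked against the released file, and `embed()` stands in for any embedding extractor.

```python
import numpy as np
from sklearn.metrics import roc_curve

def load_trials(path):
    """Parse a trial list with lines of the form '<label> <utt1> <utt2>'."""
    labels, pairs = [], []
    with open(path) as f:
        for line in f:
            label, utt1, utt2 = line.split()
            labels.append(int(label))
            pairs.append((utt1, utt2))
    return np.array(labels), pairs

def cosine_score(e1, e2):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))

def equal_error_rate(labels, scores):
    """EER: the operating point where false accept and false reject rates meet."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2

# Usage (embed() is a hypothetical utterance-to-embedding function):
# labels, pairs = load_trials("voxceleb1_b_trials.txt")
# scores = [cosine_score(embed(u1), embed(u2)) for u1, u2 in pairs]
# print(f"EER: {100 * equal_error_rate(labels, scores):.2f}%")
```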

Download

Related Links

Other VoxCeleb datasets can be found here